Brain Development & Education Lab Wiki
bde_wiki
http://depts.washington.edu/bdelab/old_wiki/index.php?title=Main_Page
MediaWiki 1.25.2
Main Page
Last edited by Jyeatman, 2015-08-13T18:45:14Z.
<strong>Brain Development & Education Lab Wiki</strong>
Consult the [//meta.wikimedia.org/wiki/Help:Contents User's Guide] for information on using the wiki software.
== Getting started ==
* [//www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]
* [//www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]
* [//www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]
Anatomy Pipeline
Created by Jyeatman, 2015-08-14; last edited 2015-10-01.
__TOC__
We collect a high-resolution T1-weighted image for every subject and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy; these steps should be performed before analyzing the rest of their MRI data.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in PAR/REC format. So for each subject we start by defining a coordinate frame in which (0,0,0) is at the anterior commissure, the anterior and posterior commissures lie in the same X and Z planes, and the midline is centered in the image. Bob Dougherty wrote a nice tool to help with this; see [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be AC-PC aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.
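To make the coordinate frame concrete: the qto_xyz matrix in the NIfTI header maps voxel coordinates to millimeters, so once the image is aligned the millimeter origin (0,0,0) should land on the anterior commissure. A quick sanity check in MATLAB (a minimal sketch; the path and [subid] are placeholders):
<syntaxhighlight lang="matlab">
% Check an already-aligned image (path and subject ID are placeholders)
im    = niftiRead('/home/projects/anatomy/[subid]/t1_acpc.nii.gz');
xform = im.qto_xyz;           % voxel -> mm (ac-pc) transform
acVox = xform \ [0 0 0 1]';   % voxel coordinates that map to (0,0,0) mm
disp(acVox(1:3)')             % should land on the anterior commissure in the image
</syntaxhighlight>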
Step 1: In a terminal, convert the PAR/REC files to NIfTI images:
cd /home/projects/MRI/[subid]
parrec2nii -c -b --scaling=fp *.PAR
Step 2: In MATLAB, compute the root mean squared (RMS) image and then AC-PC align it:
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz); % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/home/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3))
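To process a few subjects in a row, the two steps can be strung together in MATLAB. This is a minimal sketch, assuming vistasoft is on the MATLAB path and parrec2nii is installed on the system; the subject IDs and the raw T1 file name are placeholders to adapt.
<syntaxhighlight lang="matlab">
% Sketch: convert and ac-pc align a list of subjects
% (subject IDs and raw T1 file name are placeholders)
subs = {'sub01', 'sub02'};
for ii = 1:numel(subs)
    mridir  = fullfile('/home/projects/MRI', subs{ii});
    anatdir = fullfile('/home/projects/anatomy', subs{ii});
    if ~exist(anatdir, 'dir'), mkdir(anatdir); end
    % Step 1: convert the PAR/REC files to nifti
    system(sprintf('cd %s && parrec2nii -c -b --scaling=fp *.PAR', mridir));
    % Step 2: RMS image, voxel size, and ac-pc alignment
    T1path = mri_rms(fullfile(mridir, 'T1.nii.gz'));  % example raw T1 file name
    im     = niftiRead(T1path);
    voxres = diag(im.qto_xyz);                        % voxel resolution (mm)
    mrAnatAverageAcpcNifti({T1path}, fullfile(anatdir, 't1_acpc.nii.gz'), [], voxres(1:3));
end
</syntaxhighlight>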
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /home/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/home/projects/anatomy/[subid]/t1_acpc.nii.gz')
==Preprocess diffusion data==
If diffusion data was acquired on the subject we want to (a) correct for EPI distortions in the data using FSL's topup tool; (b) correct for subject motion and eddy currents; (c) fit a tensor model and create a dt6.mat file; (d) fit the CSD model with mrtrix; (e) run AFQ to segment the fibers into all the major fiber groups. Jason Yeatman has written a helpful utility to run FSL's topup and eddy functions:
fsl_preprocess(dwi_files, bvecs_file, bvals_file, pe_dir, outdir)
This function is also wrapped within another utility to run this whole pipeline (steps a-e) on a subject
bde_preprocessdiffusion
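For concreteness, here is a minimal sketch of what a call to fsl_preprocess might look like for a single subject. The diffusion file names, the phase-encode direction code, and the output directory are placeholders for illustration only (check the function's help text for the exact conventions); bde_preprocessdiffusion wraps this call as part of the full pipeline.
% Hypothetical example inputs; adjust to the subject's actual diffusion files
dwi_files  = {'/home/projects/MRI/[subid]/dwi_ap.nii.gz', '/home/projects/MRI/[subid]/dwi_pa.nii.gz'};
bvecs_file = '/home/projects/MRI/[subid]/dwi.bvecs';
bvals_file = '/home/projects/MRI/[subid]/dwi.bvals';
pe_dir     = 2;   % assumed code for the phase-encode direction; check the function help
outdir     = '/home/projects/MRI/[subid]/dmri_preprocessed';
fsl_preprocess(dwi_files, bvecs_file, bvals_file, pe_dir, outdir)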
k4ucri693zaug9alvp4f8rjn4728mho
51
50
2015-09-30T23:22:19Z
Jyeatman
1
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /home/projects/MRI/[subid]
parrec2nii -c -b --scaling=fp *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz); % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/home/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3))
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /home/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/home/projects/anatomy/[subid]/t1_acpc.nii.gz')
kv2g8d41fbfljm7ehyj2db3nmkerecf
53
51
2015-10-01T19:09:12Z
Jyeatman
1
/* AC-PC Aligned Nifti Image */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /home/projects/MRI/[subid]
parrec2nii -c -b --scaling=fp *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/home/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3))
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /home/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/home/projects/anatomy/[subid]/t1_acpc.nii.gz')
mmmukn9h4hnb3ujv06bm21qrtjm3m2w
107
53
2015-11-02T21:10:24Z
Jyeatman
1
/* AC-PC Aligned Nifti Image */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /home/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/home/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3))
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /home/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/home/projects/anatomy/[subid]/t1_acpc.nii.gz')
q2mbvhrs6vm2zk6nrcfvby15nj9d384
108
107
2015-11-02T21:20:19Z
Jyeatman
1
/* Freesurfer Segmentation */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /home/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/home/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3))
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /home/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/home/projects/anatomy/[subid]/t1_acpc.nii.gz')
3d78migyjvneadelkunhghw5tjn0d6g
109
108
2015-11-02T21:24:05Z
Jyeatman
1
/* AC-PC Aligned Nifti Image */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /home/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align. If a subject has multiple images then the rms operation should be run on each image and then a cell-array with paths to all the images can be pushed through mrAnatAverageAcpcNifti resulting in a very nice anatomy
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/home/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3))
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /home/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/home/projects/anatomy/[subid]/t1_acpc.nii.gz')
osmw2gveyt7sgmwx0unktybh4uu8y34
123
109
2015-11-03T01:02:32Z
Jyeatman
1
/* AC-PC Aligned Nifti Image */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /mnt/diskArray/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align. If a subject has multiple images then the rms operation should be run on each image and then a cell-array with paths to all the images can be pushed through mrAnatAverageAcpcNifti resulting in a very nice anatomy
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/home/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3))
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /home/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/home/projects/anatomy/[subid]/t1_acpc.nii.gz')
nqbiz2khgggogbwfuxelm48smlrfyxs
129
123
2015-11-06T19:43:38Z
Pdonnelly
2
/* AC-PC Aligned Nifti Image */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /mnt/diskArray/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align. If a subject has multiple images then the rms operation should be run on each image and then a cell-array with paths to all the images can be pushed through mrAnatAverageAcpcNifti resulting in a very nice anatomy
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3))
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /home/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/home/projects/anatomy/[subid]/t1_acpc.nii.gz')
csos9wqvtla1s2tgfa1nqyra0mnzgbq
130
129
2015-11-06T19:44:27Z
Pdonnelly
2
/* Freesurfer Segmentation */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /mnt/diskArray/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align. If a subject has multiple images then the rms operation should be run on each image and then a cell-array with paths to all the images can be pushed through mrAnatAverageAcpcNifti resulting in a very nice anatomy
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3))
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /home/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz')
germq6s145e51agwrev4949lpd1rxdc
131
130
2015-11-06T19:44:58Z
Pdonnelly
2
/* Freesurfer Segmentation */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /mnt/diskArray/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align. If a subject has multiple images then the rms operation should be run on each image and then a cell-array with paths to all the images can be pushed through mrAnatAverageAcpcNifti resulting in a very nice anatomy
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3))
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz')
64hofr0bzmxqbh5v75gp19qn2058o4y
132
131
2015-11-06T19:48:32Z
Pdonnelly
2
/* Freesurfer Segmentation */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /mnt/diskArray/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align. If a subject has multiple images then the rms operation should be run on each image and then a cell-array with paths to all the images can be pushed through mrAnatAverageAcpcNifti resulting in a very nice anatomy
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3))
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz')
For subjects with multiple scans, the output destination should be [subid]_MR[#]. For example, a subject's (205_AB) second scan for the NLR study would be written: NLR_205_AB_MR2. This is in line with the Freesurfer system of notation.
fjurycp4kxkk4wn5ycw4kno4ivg8ktj
177
132
2015-11-20T20:30:05Z
Pdonnelly
2
/* AC-PC Aligned Nifti Image */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /mnt/diskArray/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align. If a subject has multiple images then the rms operation should be run on each image and then a cell-array with paths to all the images can be pushed through mrAnatAverageAcpcNifti resulting in a very nice anatomy
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3)')
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz')
For subjects with multiple scans, the output destination should be [subid]_MR[#]. For example, a subject's (205_AB) second scan for the NLR study would be written: NLR_205_AB_MR2. This is in line with the Freesurfer system of notation.
4tth0gbe4h4ilnre0yxo08qa6st49ct
183
177
2015-12-02T02:05:33Z
Pdonnelly
2
/* Freesurfer Segmentation */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /mnt/diskArray/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align. If a subject has multiple images then the rms operation should be run on each image and then a cell-array with paths to all the images can be pushed through mrAnatAverageAcpcNifti resulting in a very nice anatomy
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3)')
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz')
For subjects with multiple scans, the output destination should be [subid]_MR[#]. For example, a subject's (205_AB) second scan for the NLR study would be written: NLR_205_AB_MR2. This is in line with the Freesurfer system of notation.
6yz6veto85mh7aq5ffn09i86giz7jhy
200
183
2015-12-03T22:14:38Z
Pdonnelly
2
/* AC-PC Aligned Nifti Image */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.<br>
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /mnt/diskArray/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align. If a subject has multiple images then the rms operation should be run on each image and then a cell-array with paths to all the images can be pushed through mrAnatAverageAcpcNifti resulting in a very nice anatomy
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3)')
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz')
For subjects with multiple scans, the output destination should be [subid]_MR[#]. For example, a subject's (205_AB) second scan for the NLR study would be written: NLR_205_AB_MR2. This is in line with the Freesurfer system of notation.
jspe2u9se4yw47qstnfh7z1ssxx3iot
201
200
2015-12-03T22:19:55Z
Pdonnelly
2
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==Data Organization/Naming Convention==
Step 1: Create a subject ID folder within the anatomy directory
NLR_###_FL_MR#
For Longitudinal scans, you will want to create a folder for each individual scan, as well as a general directory NLR_###_FL to store an AC-PC aligned image of the total average across scans.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.<br>
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /mnt/diskArray/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align. If a subject has multiple images then the rms operation should be run on each image and then a cell-array with paths to all the images can be pushed through mrAnatAverageAcpcNifti resulting in a very nice anatomy
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3)')
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz')
For subjects with multiple scans, the output destination should be [subid]_MR[#]. For example, a subject's (205_AB) second scan for the NLR study would be written: NLR_205_AB_MR2. This is in line with the Freesurfer system of notation.
8axsrka9wqilfz8utylrqm6eowjoqfg
202
201
2015-12-03T22:20:38Z
Pdonnelly
2
/* Data Organization/Naming Convention */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==Data Organization/Naming Convention==
Step 1: Create a subject ID folder within the anatomy directory
NLR_###_FL_MR#
For Longitudinal scans, you will want to create a folder for each individual scan, as well as a general directory NLR_###_FL to store an AC-PC aligned image of the total average across scans.
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.<br>
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /mnt/diskArray/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align. If a subject has multiple images then the rms operation should be run on each image and then a cell-array with paths to all the images can be pushed through mrAnatAverageAcpcNifti resulting in a very nice anatomy
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3)')
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz')
For subjects with multiple scans, the output destination should be [subid]_MR[#]. For example, a subject's (205_AB) second scan for the NLR study would be written: NLR_205_AB_MR2. This is in line with the Freesurfer system of notation.
aixf3tgjp2byubqmm605ol7x21cn0f2
203
202
2015-12-03T22:25:43Z
Pdonnelly
2
/* Data Organization/Naming Convention */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==Data Organization/Naming Convention==
Step 1: Create a subject ID folder within the anatomy directory
NLR_###_FL_MR#
For Longitudinal scans, you will want to create a folder for each individual scan, as well as a general directory NLR_###_FL to store an AC-PC aligned image of the total average across scans.<br>
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.<br>
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /mnt/diskArray/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB compute the root mean squared (RMS) image and then ac-pc align. If a subject has multiple images then the rms operation should be run on each image and then a cell-array with paths to all the images can be pushed through mrAnatAverageAcpcNifti resulting in a very nice anatomy
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3)')
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or even better use this handy matlab function written by Jon Winawer to run freesurfer and then also build some useful files that we like to use for data visualization such as a high resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz')
For subjects with multiple scans, the output destination should be [subid]_MR[#]. For example, a subject's (205_AB) second scan for the NLR study would be written: NLR_205_AB_MR2. This is in line with the Freesurfer system of notation.
3ufyq7kbr96orni1afgkq78san6717h
204
203
2015-12-03T22:27:38Z
Pdonnelly
2
/* Data Organization/Naming Convention */
wikitext
text/x-wiki
__TOC__
We collect a high resolution T1-weighted image on every subject, and use this image to define the coordinate space for all subsequent analyses. This section describes the processing steps for a subject's T1-weighted anatomy and should be performed before analyzing the rest of their MRI data.
==Data Organization/Naming Convention==
Step 1: Create a subject ID folder within the anatomy directory
NLR_###_FL_MR#
For longitudinal scans, create a folder for each individual scan, as well as a general directory NLR_###_FL to store an AC-PC aligned image averaged across all of that subject's scans.<br>
Note that [subid], as used below, includes the _MR# suffix for the individual scan.
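As a concrete example of this layout (using the hypothetical subject 205_AB from the example at the bottom of this page), the folders for a second scan could be created from MATLAB like this:
% Hypothetical example for subject 205_AB, scan 2
anatdir = '/mnt/diskArray/projects/anatomy';
mkdir(fullfile(anatdir, 'NLR_205_AB'))      % general directory for the across-scan average
mkdir(fullfile(anatdir, 'NLR_205_AB_MR2'))  % directory for this individual scan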
==AC-PC Aligned Nifti Image==
[[File:Ac-pc.jpg|300px|right]]
Data can come off the scanner with arbitrary header information and in parrec format. So for each subject we start by defining a coordinate frame where 0,0,0 is at the anterior commissure, the anterior and posterior commissure are in the same X and Z planes, and the mid-line is centered in the image. Bob Dougherty wrote a nice tool to help with this. See [https://github.com/vistalab/vistasoft/blob/master/mrAnatomy/VolumeUtilities/mrAnatAverageAcpcNifti.m mrAnatAverageAcpcNifti]. The subject's T1-weighted image should be ac-pc aligned, resliced (preserving its resolution), and saved in the subject's anatomy directory.<br>
Step 1: In a terminal, convert the PAR/REC files to nifti images
cd /mnt/diskArray/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB, compute the root mean squared (RMS) image and then AC-PC align it. If a subject has multiple T1 images, run the RMS operation on each image and then pass a cell array with the paths to all of the images through mrAnatAverageAcpcNifti; averaging them yields a cleaner anatomy (a sketch of the multi-image case follows the code below).
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
im = niftiRead(T1path); % Read root mean squared image
voxres = diag(im.qto_xyz)'; % Get the voxel resolution of the image (mm)
mrAnatAverageAcpcNifti({T1path}, '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3)')
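For the multi-image case described in Step 2, a minimal sketch would look like the following; the two input file names are placeholders for the subject's actual T1 acquisitions.
% Sketch for a subject with two T1 acquisitions (file names are hypothetical)
T1paths = {'/mnt/diskArray/projects/MRI/[subid]/t1_scan1.nii.gz', ...
           '/mnt/diskArray/projects/MRI/[subid]/t1_scan2.nii.gz'};
for ii = 1:numel(T1paths)
    T1paths{ii} = mri_rms(T1paths{ii});  % RMS image for each acquisition
end
im = niftiRead(T1paths{1});              % use the first image to set the output resolution
voxres = diag(im.qto_xyz)';              % voxel resolution of the image (mm)
mrAnatAverageAcpcNifti(T1paths, '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz', [], voxres(1:3)')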
==Freesurfer Segmentation==
Freesurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using freesurfer from the command line type:
recon-all -i /mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz -subjid [subid] -all
Or, even better, use this handy MATLAB function written by Jon Winawer to run FreeSurfer and also build some useful files that we like to use for data visualization, such as a high-resolution gray/white segmentation.
fs_autosegmentToITK([subid], '/mnt/diskArray/projects/anatomy/[subid]/t1_acpc.nii.gz')
For subjects with multiple scans, the output destination should be [subid]_MR[#]. For example, the second scan of subject 205_AB in the NLR study would be written NLR_205_AB_MR2. This is in line with the FreeSurfer system of notation. A sketch of such a call is given below.
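A minimal sketch for the second scan of subject 205_AB (the subject ID and directory layout follow the naming convention at the top of this page and are otherwise hypothetical):
% Hypothetical call for the second NLR scan of subject 205_AB
subid = 'NLR_205_AB_MR2';
fs_autosegmentToITK(subid, ['/mnt/diskArray/projects/anatomy/' subid '/t1_acpc.nii.gz'])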
h8mve3f729qplgxjd9zlviqho4ml433
Behavioral
0
15
85
2015-10-28T22:10:19Z
Pdonnelly
2
Created page with "=Reading Battery="
wikitext
text/x-wiki
=Reading Battery=
n1ja1etk8frwwm7nck1matlpqh9atpp
86
85
2015-10-28T22:15:09Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement
Test of Word Reading Efficiency-2 (TOWRE-2)
Wechsler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
9xws9eeeyow4ph0ix4u7fx34bw0mp8v
87
86
2015-10-28T22:20:27Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Wechsler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
WJ-IV
The WJ-IV Tests of Achievement contains 20 tests measuring reading, mathematics, written language, and academic knowledge. As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
*Letter-Word Identification:
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English.
nm3w5mo2c74z5ia2327c2hsffpdylrr
88
87
2015-10-28T22:23:30Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Wechsler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
WJ-IV
The WJ-IV Tests of Achievement contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>Letter-Word Identification:
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English.
io5syuwinyi01dko9ndrrm3kg7evjeo
89
88
2015-10-28T22:24:17Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Wechsler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
WJ-IV
The WJ-IV Tests of Achievement contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>Letter-Word Identification:
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English.
ohwc5mokn5qlcao5lw061atrq4mev3w
90
89
2015-10-28T22:24:47Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Wechsler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
WJ-IV
The WJ-IV Tests of Achievement contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>Letter-Word Identification:
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English.
ebatviuuf05gsoc2fc95g4o92yd66g0
91
90
2015-10-28T22:26:43Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Wechsler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
WJ-IV
The WJ-IV Tests of Achievement contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>Letter-Word Identification:
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English. </li>
<li>Word Attack: Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>Oral Reading: Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>Sentence Reading Fluency: Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
0j69vuvpn8ns33cr8uunpz6m1w9vj9e
92
91
2015-10-28T22:27:34Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Wechsler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
WJ-IV <br>
The WJ-IV Tests of Achievement contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>Letter-Word Identification:
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English. </li>
<li>Word Attack: Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>Oral Reading: Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>Sentence Reading Fluency: Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
3z4qxezaqe2k1dp69fh7o2tg8sx76jn
93
92
2015-10-28T22:28:39Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Wechsler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
WJ-IV <br>
The WJ-IV Tests of Achievement contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>'''Letter-Word Identification''':
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English. </li>
<li>'''Word Attack''': Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>'''Oral Reading''': Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>'''Sentence Reading Fluency''': Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
rp44kfem4pb56l0jp82lgfes3b5posq
94
93
2015-10-28T22:30:58Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Weschler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
WJ-IV <br>
The WJ-IV Tests of Achievement contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>'''Letter-Word Identification''':
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English. </li>
<li>'''Word Attack''': Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>'''Oral Reading''': Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>'''Sentence Reading Fluency''': Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
<br>
<br>
TOWRE-2 <br>
The TOWRE-2 contains two subsets, each of which has four alternate forms.
<li>'''The Sight Word Efficiency''' subtest assesses the number of real words printed in vertical lists that an individual can accurately identify within 45 seconds.</li>
<li>'''Phonemic Decoding Efficiency''' subtest measures the number of pronounceable nonwords presented in vertical lists than an individual can accurately decode within 45 seconds.</li>
44qoyhug6fwp6pi2l9npa09ghj7zow0
95
94
2015-10-28T22:31:27Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Weschler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
WJ-IV <br>
The WJ-IV Tests of Achievement contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>'''Letter-Word Identification''':
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English. </li>
<li>'''Word Attack''': Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>'''Oral Reading''': Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>'''Sentence Reading Fluency''': Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
TOWRE-2 <br>
The TOWRE-2 contains two subsets, each of which has four alternate forms.
<li>'''The Sight Word Efficiency''' subtest assesses the number of real words printed in vertical lists that an individual can accurately identify within 45 seconds.</li>
<li>'''Phonemic Decoding Efficiency''' subtest measures the number of pronounceable nonwords presented in vertical lists than an individual can accurately decode within 45 seconds.</li>
cyd299aqfvzbtpjt995dq1m7pxed21s
96
95
2015-10-28T22:31:53Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Weschler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
WJ-IV <br>
The WJ-IV Tests of Achievement contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>'''Letter-Word Identification''':
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English. </li>
<li>'''Word Attack''': Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>'''Oral Reading''': Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>'''Sentence Reading Fluency''': Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
</ul>
TOWRE-2 <br>
The TOWRE-2 contains two subsets, each of which has four alternate forms.
<li>'''The Sight Word Efficiency''' subtest assesses the number of real words printed in vertical lists that an individual can accurately identify within 45 seconds.</li>
<li>'''Phonemic Decoding Efficiency''' subtest measures the number of pronounceable nonwords presented in vertical lists than an individual can accurately decode within 45 seconds.</li>
t1l088u97x9lntztvfpdivuhzlhc2hh
97
96
2015-10-28T22:36:24Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Weschler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
WJ-IV <br>
The WJ-IV Tests of Achievement contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>'''Letter-Word Identification''':
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English. </li>
<li>'''Word Attack''': Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>'''Oral Reading''': Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>'''Sentence Reading Fluency''': Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
</ul>
TOWRE-2 <br>
The TOWRE-2 contains two subsets, each of which has four alternate forms.
<ul type="circle">
<li>'''The Sight Word Efficiency''' subtest assesses the number of real words printed in vertical lists that an individual can accurately identify within 45 seconds.</li>
<li>'''Phonemic Decoding Efficiency''' subtest measures the number of pronounceable nonwords presented in vertical lists than an individual can accurately decode within 45 seconds.</li>
</ul>
WASI <br>
<ul type="circle">
<li>'''Vocabulary''': The vocabulary subtest has 31 items, including 3 picture items and 28 verbal items. For picture items, the examinee names the object presented visually. For verbal items, the examinee defines words that are presented visually and orally. Vocabulary is designed to measure an examinee’s word knowledge and verbal concept formation. </li>
<li>'''Matrix Reasoning''': The Matrix Reasoning subtest has 30 items. The examinee views a series of incomplete matrices and completes each one by selecting the correct response option. The subtest taps … classification and spatial ability, knowledge of part-whole relationships, simultaneous processing, and perceptual organization. </li>
</ul>
CTOPP-2 <br>
The CTOPP-2 is multi-part test that measures phonological awareness, phonological memory, and rapid naming skills. We utilized the following tests:
<ul type="circle">
<li>'''Elision''': This 34-item subtest measures the extent to which an individual can say a word and then say what is left after dropping out designated sounds. For the first two items, the examiner says compound words and asks the examinee to say that word and then say the word that remains after dropping one of the compound words. For the remaining items, the individual listens to a word and repeats that word and then is asked to say the word without a specific sound. For example, the examinee is instructed, “Say bold.” After repeating “bold,” the examinee is told, “Now say ‘bold’ without saying /b/.” The correct response is “old.”</li>
<li>'''Memory for Digits''': This 28-item subtest measures the extent to which an individual can repeat a series of numbers ranging in length from two to eight digits. After the individual has listened to a series of audio-recorded numbers presented at a rate of 2 per second, he or she is asked to repeat the numbers in the same order in which they were heard.</li>
<li>'''Rapid Digit Naming''': This 36-item subtest measures the speed with which an individual can name numbers. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged numbers (i.e., 2, 3, 4, 5, 7, 8). The examinee is instructed to name the numbers on the top row from left to right, and then name the numbers on the next row from left to right, and so on, until all of the numbers have been named. The individual’s score is the total number of seconds taken to name all of the numbers on the page.</li>
<li>'''Rapid Letter Naming''': This 36-item subtest measures the speed with which an individual can name letters. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged letters (i.e., a, c, k, n, s, t). The examinee is instructed to name the letters on the top row from left to right, and then name the letters on the next row from left to right, and so on, until all of the letters have been named. The individual’s score is the total number of seconds taken to name all of the letters.</li>
</ul>
477k5bsztmryl50v2msxq4ctwc0f3jt
98
97
2015-10-28T22:37:07Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Weschler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
=WJ-IV= <br>
The WJ-IV Tests of Achievement contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>'''Letter-Word Identification''':
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English. </li>
<li>'''Word Attack''': Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>'''Oral Reading''': Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>'''Sentence Reading Fluency''': Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
</ul>
=TOWRE-2= <br>
The TOWRE-2 contains two subsets, each of which has four alternate forms.
<ul type="circle">
<li>'''The Sight Word Efficiency''' subtest assesses the number of real words printed in vertical lists that an individual can accurately identify within 45 seconds.</li>
<li>'''Phonemic Decoding Efficiency''' subtest measures the number of pronounceable nonwords presented in vertical lists than an individual can accurately decode within 45 seconds.</li>
</ul>
=WASI= <br>
<ul type="circle">
<li>'''Vocabulary''': The vocabulary subtest has 31 items, including 3 picture items and 28 verbal items. For picture items, the examinee names the object presented visually. For verbal items, the examinee defines words that are presented visually and orally. Vocabulary is designed to measure an examinee’s word knowledge and verbal concept formation. </li>
<li>'''Matrix Reasoning''': The Matrix Reasoning subtest has 30 items. The examinee views a series of incomplete matrices and completes each one by selecting the correct response option. The subtest taps … classification and spatial ability, knowledge of part-whole relationships, simultaneous processing, and perceptual organization. </li>
</ul>
=CTOPP-2= <br>
The CTOPP-2 is multi-part test that measures phonological awareness, phonological memory, and rapid naming skills. We utilized the following tests:
<ul type="circle">
<li>'''Elision''': This 34-item subtest measures the extent to which an individual can say a word and then say what is left after dropping out designated sounds. For the first two items, the examiner says compound words and asks the examinee to say that word and then say the word that remains after dropping one of the compound words. For the remaining items, the individual listens to a word and repeats that word and then is asked to say the word without a specific sound. For example, the examinee is instructed, “Say bold.” After repeating “bold,” the examinee is told, “Now say ‘bold’ without saying /b/.” The correct response is “old.”</li>
<li>'''Memory for Digits''': This 28-item subtest measures the extent to which an individual can repeat a series of numbers ranging in length from two to eight digits. After the individual has listened to a series of audio-recorded numbers presented at a rate of 2 per second, he or she is asked to repeat the numbers in the same order in which they were heard.</li>
<li>'''Rapid Digit Naming''': This 36-item subtest measures the speed with which an individual can name numbers. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged numbers (i.e., 2, 3, 4, 5, 7, 8). The examinee is instructed to name the numbers on the top row from left to right, and then name the numbers on the next row from left to right, and so on, until all of the numbers have been named. The individual’s score is the total number of seconds taken to name all of the numbers on the page.</li>
<li>'''Rapid Letter Naming''': This 36-item subtest measures the speed with which an individual can name letters. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged letters (i.e., a, c, k, n, s, t). The examinee is instructed to name the letters on the top row from left to right, and then name the letters on the next row from left to right, and so on, until all of the letters have been named. The individual’s score is the total number of seconds taken to name all of the letters.</li>
</ul>
h77xnjeu43ikigq7r3sf2rl5ja6z6vr
99
98
2015-10-28T22:37:39Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Weschler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
=WJ-IV=
The WJ-IV Tests of Achievement contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>'''Letter-Word Identification''':
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English. </li>
<li>'''Word Attack''': Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>'''Oral Reading''': Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>'''Sentence Reading Fluency''': Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
</ul>
=TOWRE-2=
The TOWRE-2 contains two subsets, each of which has four alternate forms.
<ul type="circle">
<li>'''The Sight Word Efficiency''' subtest assesses the number of real words printed in vertical lists that an individual can accurately identify within 45 seconds.</li>
<li>'''Phonemic Decoding Efficiency''' subtest measures the number of pronounceable nonwords presented in vertical lists than an individual can accurately decode within 45 seconds.</li>
</ul>
=WASI=
<ul type="circle">
<li>'''Vocabulary''': The vocabulary subtest has 31 items, including 3 picture items and 28 verbal items. For picture items, the examinee names the object presented visually. For verbal items, the examinee defines words that are presented visually and orally. Vocabulary is designed to measure an examinee’s word knowledge and verbal concept formation. </li>
<li>'''Matrix Reasoning''': The Matrix Reasoning subtest has 30 items. The examinee views a series of incomplete matrices and completes each one by selecting the correct response option. The subtest taps … classification and spatial ability, knowledge of part-whole relationships, simultaneous processing, and perceptual organization. </li>
</ul>
=CTOPP-2=
The CTOPP-2 is multi-part test that measures phonological awareness, phonological memory, and rapid naming skills. We utilized the following tests:
<ul type="circle">
<li>'''Elision''': This 34-item subtest measures the extent to which an individual can say a word and then say what is left after dropping out designated sounds. For the first two items, the examiner says compound words and asks the examinee to say that word and then say the word that remains after dropping one of the compound words. For the remaining items, the individual listens to a word and repeats that word and then is asked to say the word without a specific sound. For example, the examinee is instructed, “Say bold.” After repeating “bold,” the examinee is told, “Now say ‘bold’ without saying /b/.” The correct response is “old.”</li>
<li>'''Memory for Digits''': This 28-item subtest measures the extent to which an individual can repeat a series of numbers ranging in length from two to eight digits. After the individual has listened to a series of audio-recorded numbers presented at a rate of 2 per second, he or she is asked to repeat the numbers in the same order in which they were heard.</li>
<li>'''Rapid Digit Naming''': This 36-item subtest measures the speed with which an individual can name numbers. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged numbers (i.e., 2, 3, 4, 5, 7, 8). The examinee is instructed to name the numbers on the top row from left to right, and then name the numbers on the next row from left to right, and so on, until all of the numbers have been named. The individual’s score is the total number of seconds taken to name all of the numbers on the page.</li>
<li>'''Rapid Letter Naming''': This 36-item subtest measures the speed with which an individual can name letters. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged letters (i.e., a, c, k, n, s, t). The examinee is instructed to name the letters on the top row from left to right, and then name the letters on the next row from left to right, and so on, until all of the letters have been named. The individual’s score is the total number of seconds taken to name all of the letters.</li>
</ul>
mfct9e6hu6lf2oq0az8gd8wrnf1kizh
100
99
2015-10-28T22:39:34Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Weschler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
=WJ-IV=
The [WJ-IV Tests of Achievement] contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>'''Letter-Word Identification''':
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English. </li>
<li>'''Word Attack''': Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>'''Oral Reading''': Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>'''Sentence Reading Fluency''': Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
</ul>
=TOWRE-2=
The TOWRE-2 contains two subsets, each of which has four alternate forms.
<ul type="circle">
<li>'''The Sight Word Efficiency''' subtest assesses the number of real words printed in vertical lists that an individual can accurately identify within 45 seconds.</li>
<li>'''Phonemic Decoding Efficiency''' subtest measures the number of pronounceable nonwords presented in vertical lists than an individual can accurately decode within 45 seconds.</li>
</ul>
=WASI=
The WASI is a shortened test of general cognitive ability, both verbal and non-verbal.
<ul type="circle">
<li>'''Vocabulary''': The vocabulary subtest has 31 items, including 3 picture items and 28 verbal items. For picture items, the examinee names the object presented visually. For verbal items, the examinee defines words that are presented visually and orally. Vocabulary is designed to measure an examinee’s word knowledge and verbal concept formation. </li>
<li>'''Matrix Reasoning''': The Matrix Reasoning subtest has 30 items. The examinee views a series of incomplete matrices and completes each one by selecting the correct response option. The subtest taps … classification and spatial ability, knowledge of part-whole relationships, simultaneous processing, and perceptual organization. </li>
</ul>
=CTOPP-2=
The CTOPP-2 is multi-part test that measures phonological awareness, phonological memory, and rapid naming skills. We utilized the following tests:
<ul type="circle">
<li>'''Elision''': This 34-item subtest measures the extent to which an individual can say a word and then say what is left after dropping out designated sounds. For the first two items, the examiner says compound words and asks the examinee to say that word and then say the word that remains after dropping one of the compound words. For the remaining items, the individual listens to a word and repeats that word and then is asked to say the word without a specific sound. For example, the examinee is instructed, “Say bold.” After repeating “bold,” the examinee is told, “Now say ‘bold’ without saying /b/.” The correct response is “old.”</li>
<li>'''Memory for Digits''': This 28-item subtest measures the extent to which an individual can repeat a series of numbers ranging in length from two to eight digits. After the individual has listened to a series of audio-recorded numbers presented at a rate of 2 per second, he or she is asked to repeat the numbers in the same order in which they were heard.</li>
<li>'''Rapid Digit Naming''': This 36-item subtest measures the speed with which an individual can name numbers. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged numbers (i.e., 2, 3, 4, 5, 7, 8). The examinee is instructed to name the numbers on the top row from left to right, and then name the numbers on the next row from left to right, and so on, until all of the numbers have been named. The individual’s score is the total number of seconds taken to name all of the numbers on the page.</li>
<li>'''Rapid Letter Naming''': This 36-item subtest measures the speed with which an individual can name letters. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged letters (i.e., a, c, k, n, s, t). The examinee is instructed to name the letters on the top row from left to right, and then name the letters on the next row from left to right, and so on, until all of the letters have been named. The individual’s score is the total number of seconds taken to name all of the letters.</li>
</ul>
19mm6ahznmlhlpyrlmmpgn455w5vs0r
101
100
2015-10-28T22:45:47Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Weschler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
=WJ-IV=
The <a href="http://www.riversidepublishing.com/products/wj-iv/index.html">WJ-IV Tests of Achievement</a> contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>'''Letter-Word Identification''':
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English. </li>
<li>'''Word Attack''': Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>'''Oral Reading''': Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>'''Sentence Reading Fluency''': Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
</ul>
=TOWRE-2=
The TOWRE-2 contains two subsets, each of which has four alternate forms.
<ul type="circle">
<li>'''The Sight Word Efficiency''' subtest assesses the number of real words printed in vertical lists that an individual can accurately identify within 45 seconds.</li>
<li>'''Phonemic Decoding Efficiency''' subtest measures the number of pronounceable nonwords presented in vertical lists than an individual can accurately decode within 45 seconds.</li>
</ul>
=WASI=
The WASI is a shortened test of general cognitive ability, both verbal and non-verbal.
<ul type="circle">
<li>'''Vocabulary''': The vocabulary subtest has 31 items, including 3 picture items and 28 verbal items. For picture items, the examinee names the object presented visually. For verbal items, the examinee defines words that are presented visually and orally. Vocabulary is designed to measure an examinee’s word knowledge and verbal concept formation. </li>
<li>'''Matrix Reasoning''': The Matrix Reasoning subtest has 30 items. The examinee views a series of incomplete matrices and completes each one by selecting the correct response option. The subtest taps … classification and spatial ability, knowledge of part-whole relationships, simultaneous processing, and perceptual organization. </li>
</ul>
=CTOPP-2=
The CTOPP-2 is multi-part test that measures phonological awareness, phonological memory, and rapid naming skills. We utilized the following tests:
<ul type="circle">
<li>'''Elision''': This 34-item subtest measures the extent to which an individual can say a word and then say what is left after dropping out designated sounds. For the first two items, the examiner says compound words and asks the examinee to say that word and then say the word that remains after dropping one of the compound words. For the remaining items, the individual listens to a word and repeats that word and then is asked to say the word without a specific sound. For example, the examinee is instructed, “Say bold.” After repeating “bold,” the examinee is told, “Now say ‘bold’ without saying /b/.” The correct response is “old.”</li>
<li>'''Memory for Digits''': This 28-item subtest measures the extent to which an individual can repeat a series of numbers ranging in length from two to eight digits. After the individual has listened to a series of audio-recorded numbers presented at a rate of 2 per second, he or she is asked to repeat the numbers in the same order in which they were heard.</li>
<li>'''Rapid Digit Naming''': This 36-item subtest measures the speed with which an individual can name numbers. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged numbers (i.e., 2, 3, 4, 5, 7, 8). The examinee is instructed to name the numbers on the top row from left to right, and then name the numbers on the next row from left to right, and so on, until all of the numbers have been named. The individual’s score is the total number of seconds taken to name all of the numbers on the page.</li>
<li>'''Rapid Letter Naming''': This 36-item subtest measures the speed with which an individual can name letters. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged letters (i.e., a, c, k, n, s, t). The examinee is instructed to name the letters on the top row from left to right, and then name the letters on the next row from left to right, and so on, until all of the letters have been named. The individual’s score is the total number of seconds taken to name all of the letters.</li>
</ul>
h4u9xi10hm38dld5jf53949by5dykz3
102
101
2015-10-28T22:49:39Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Weschler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
=WJ-IV=
The [http://www.riversidepublishing.com/products/wj-iv/index.html WJ-IV Tests of Achievement] contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>'''Letter-Word Identification''':
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English. </li>
<li>'''Word Attack''': Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>'''Oral Reading''': Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>'''Sentence Reading Fluency''': Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
</ul>
=TOWRE-2=
The TOWRE-2 contains two subsets, each of which has four alternate forms.
<ul type="circle">
<li>'''The Sight Word Efficiency''' subtest assesses the number of real words printed in vertical lists that an individual can accurately identify within 45 seconds.</li>
<li>'''Phonemic Decoding Efficiency''' subtest measures the number of pronounceable nonwords presented in vertical lists than an individual can accurately decode within 45 seconds.</li>
</ul>
=WASI=
The WASI is a shortened test of general cognitive ability, both verbal and non-verbal.
<ul type="circle">
<li>'''Vocabulary''': The vocabulary subtest has 31 items, including 3 picture items and 28 verbal items. For picture items, the examinee names the object presented visually. For verbal items, the examinee defines words that are presented visually and orally. Vocabulary is designed to measure an examinee’s word knowledge and verbal concept formation. </li>
<li>'''Matrix Reasoning''': The Matrix Reasoning subtest has 30 items. The examinee views a series of incomplete matrices and completes each one by selecting the correct response option. The subtest taps … classification and spatial ability, knowledge of part-whole relationships, simultaneous processing, and perceptual organization. </li>
</ul>
=CTOPP-2=
The CTOPP-2 is multi-part test that measures phonological awareness, phonological memory, and rapid naming skills. We utilized the following tests:
<ul type="circle">
<li>'''Elision''': This 34-item subtest measures the extent to which an individual can say a word and then say what is left after dropping out designated sounds. For the first two items, the examiner says compound words and asks the examinee to say that word and then say the word that remains after dropping one of the compound words. For the remaining items, the individual listens to a word and repeats that word and then is asked to say the word without a specific sound. For example, the examinee is instructed, “Say bold.” After repeating “bold,” the examinee is told, “Now say ‘bold’ without saying /b/.” The correct response is “old.”</li>
<li>'''Memory for Digits''': This 28-item subtest measures the extent to which an individual can repeat a series of numbers ranging in length from two to eight digits. After the individual has listened to a series of audio-recorded numbers presented at a rate of 2 per second, he or she is asked to repeat the numbers in the same order in which they were heard.</li>
<li>'''Rapid Digit Naming''': This 36-item subtest measures the speed with which an individual can name numbers. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged numbers (i.e., 2, 3, 4, 5, 7, 8). The examinee is instructed to name the numbers on the top row from left to right, and then name the numbers on the next row from left to right, and so on, until all of the numbers have been named. The individual’s score is the total number of seconds taken to name all of the numbers on the page.</li>
<li>'''Rapid Letter Naming''': This 36-item subtest measures the speed with which an individual can name letters. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged letters (i.e., a, c, k, n, s, t). The examinee is instructed to name the letters on the top row from left to right, and then name the letters on the next row from left to right, and so on, until all of the letters have been named. The individual’s score is the total number of seconds taken to name all of the letters.</li>
</ul>
mbcqnwpahz821o2g6w1pwjelff8hz42
103
102
2015-10-28T22:51:15Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as a part of our work in the BDE Lab. Below is a list of the measures we utilize, a description of their use, and details of their administration.
Woodcock-Johnson IV Tests of Achievement (WJ-IV)
Test of Word Reading Efficiency-2 (TOWRE-2)
Weschler Abbreviated Scale of Intelligence (WASI)
Comprehensive Test of Phonological Processing-2 (CTOPP-2)
=WJ-IV=
The [http://www.riversidepublishing.com/products/wj-iv/index.html WJ-IV Tests of Achievement] contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As a part of the study, only four tests were used, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>'''Letter-Word Identification''':
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English. </li>
<li>'''Word Attack''': Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>'''Oral Reading''': Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>'''Sentence Reading Fluency''': Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
</ul>
=TOWRE-2=
The [http://www.proedinc.com/customer/productview.aspx?id=5074 TOWRE-2] contains two subsets, each of which has four alternate forms.
<ul type="circle">
<li>'''The Sight Word Efficiency''' subtest assesses the number of real words printed in vertical lists that an individual can accurately identify within 45 seconds.</li>
<li>'''Phonemic Decoding Efficiency''' subtest measures the number of pronounceable nonwords presented in vertical lists than an individual can accurately decode within 45 seconds.</li>
</ul>
=WASI=
The [http://www.pearsonclinical.com/psychology/products/100000037/wechsler-abbreviated-scale-of-intelligence--second-edition-wasi-ii.html#tab-details WASI] is a shortened test of general cognitive ability, both verbal and non-verbal.
<ul type="circle">
<li>'''Vocabulary''': The vocabulary subtest has 31 items, including 3 picture items and 28 verbal items. For picture items, the examinee names the object presented visually. For verbal items, the examinee defines words that are presented visually and orally. Vocabulary is designed to measure an examinee’s word knowledge and verbal concept formation. </li>
<li>'''Matrix Reasoning''': The Matrix Reasoning subtest has 30 items. The examinee views a series of incomplete matrices and completes each one by selecting the correct response option. The subtest taps … classification and spatial ability, knowledge of part-whole relationships, simultaneous processing, and perceptual organization. </li>
</ul>
=CTOPP-2=
The [http://www.proedinc.com/customer/productview.aspx?id=5187 CTOPP-2] is multi-part test that measures phonological awareness, phonological memory, and rapid naming skills. We utilized the following tests:
<ul type="circle">
<li>'''Elision''': This 34-item subtest measures the extent to which an individual can say a word and then say what is left after dropping out designated sounds. For the first two items, the examiner says compound words and asks the examinee to say that word and then say the word that remains after dropping one of the compound words. For the remaining items, the individual listens to a word and repeats that word and then is asked to say the word without a specific sound. For example, the examinee is instructed, “Say bold.” After repeating “bold,” the examinee is told, “Now say ‘bold’ without saying /b/.” The correct response is “old.”</li>
<li>'''Memory for Digits''': This 28-item subtest measures the extent to which an individual can repeat a series of numbers ranging in length from two to eight digits. After the individual has listened to a series of audio-recorded numbers presented at a rate of 2 per second, he or she is asked to repeat the numbers in the same order in which they were heard.</li>
<li>'''Rapid Digit Naming''': This 36-item subtest measures the speed with which an individual can name numbers. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged numbers (i.e., 2, 3, 4, 5, 7, 8). The examinee is instructed to name the numbers on the top row from left to right, and then name the numbers on the next row from left to right, and so on, until all of the numbers have been named. The individual’s score is the total number of seconds taken to name all of the numbers on the page.</li>
<li>'''Rapid Letter Naming''': This 36-item subtest measures the speed with which an individual can name letters. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged letters (i.e., a, c, k, n, s, t). The examinee is instructed to name the letters on the top row from left to right, and then name the letters on the next row from left to right, and so on, until all of the letters have been named. The individual’s score is the total number of seconds taken to name all of the letters.</li>
</ul>
5v8zxzlzbwj7nj6xoxo3e75d0o810yv
104
103
2015-10-28T22:51:53Z
Pdonnelly
2
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as part of our work in the BDE Lab. Below is a list of the measures we use, a description of each, and details of their administration.
* Woodcock-Johnson IV Tests of Achievement (WJ-IV)
* Test of Word Reading Efficiency-2 (TOWRE-2)
* Wechsler Abbreviated Scale of Intelligence (WASI)
* Comprehensive Test of Phonological Processing-2 (CTOPP-2)
==WJ-IV==
The [http://www.riversidepublishing.com/products/wj-iv/index.html WJ-IV Tests of Achievement] battery contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
We administer only four of these tests, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>'''Letter-Word Identification''':
Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English. </li>
<li>'''Word Attack''': Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>'''Oral Reading''': Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>'''Sentence Reading Fluency''': Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
</ul>
==TOWRE-2==
The [http://www.proedinc.com/customer/productview.aspx?id=5074 TOWRE-2] contains two subsets, each of which has four alternate forms.
<ul type="circle">
<li>'''The Sight Word Efficiency''' subtest assesses the number of real words printed in vertical lists that an individual can accurately identify within 45 seconds.</li>
<li>'''Phonemic Decoding Efficiency''' subtest measures the number of pronounceable nonwords presented in vertical lists than an individual can accurately decode within 45 seconds.</li>
</ul>
==WASI==
The [http://www.pearsonclinical.com/psychology/products/100000037/wechsler-abbreviated-scale-of-intelligence--second-edition-wasi-ii.html#tab-details WASI] is a shortened test of general cognitive ability, both verbal and non-verbal.
<ul type="circle">
<li>'''Vocabulary''': The vocabulary subtest has 31 items, including 3 picture items and 28 verbal items. For picture items, the examinee names the object presented visually. For verbal items, the examinee defines words that are presented visually and orally. Vocabulary is designed to measure an examinee’s word knowledge and verbal concept formation. </li>
<li>'''Matrix Reasoning''': The Matrix Reasoning subtest has 30 items. The examinee views a series of incomplete matrices and completes each one by selecting the correct response option. The subtest taps … classification and spatial ability, knowledge of part-whole relationships, simultaneous processing, and perceptual organization. </li>
</ul>
==CTOPP-2==
The [http://www.proedinc.com/customer/productview.aspx?id=5187 CTOPP-2] is multi-part test that measures phonological awareness, phonological memory, and rapid naming skills. We utilized the following tests:
<ul type="circle">
<li>'''Elision''': This 34-item subtest measures the extent to which an individual can say a word and then say what is left after dropping out designated sounds. For the first two items, the examiner says compound words and asks the examinee to say that word and then say the word that remains after dropping one of the compound words. For the remaining items, the individual listens to a word and repeats that word and then is asked to say the word without a specific sound. For example, the examinee is instructed, “Say bold.” After repeating “bold,” the examinee is told, “Now say ‘bold’ without saying /b/.” The correct response is “old.”</li>
<li>'''Memory for Digits''': This 28-item subtest measures the extent to which an individual can repeat a series of numbers ranging in length from two to eight digits. After the individual has listened to a series of audio-recorded numbers presented at a rate of 2 per second, he or she is asked to repeat the numbers in the same order in which they were heard.</li>
<li>'''Rapid Digit Naming''': This 36-item subtest measures the speed with which an individual can name numbers. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged numbers (i.e., 2, 3, 4, 5, 7, 8). The examinee is instructed to name the numbers on the top row from left to right, and then name the numbers on the next row from left to right, and so on, until all of the numbers have been named. The individual’s score is the total number of seconds taken to name all of the numbers on the page.</li>
<li>'''Rapid Letter Naming''': This 36-item subtest measures the speed with which an individual can name letters. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged letters (i.e., a, c, k, n, s, t). The examinee is instructed to name the letters on the top row from left to right, and then name the letters on the next row from left to right, and so on, until all of the letters have been named. The individual’s score is the total number of seconds taken to name all of the letters.</li>
</ul>
dlrhpi2cmp4aeraxy6gkp9oiqpavyg2
106
105
2015-10-28T23:01:51Z
Pdonnelly
2
/* WJ-IV */
wikitext
text/x-wiki
=Reading Battery=
We use a variety of standardized measures of reading aptitude as part of our work in the BDE Lab. Below are the measures we use and a description of each:
* Woodcock-Johnson IV Tests of Achievement (WJ-IV)
* Test of Word Reading Efficiency-2 (TOWRE-2)
* Wechsler Abbreviated Scale of Intelligence (WASI)
* Comprehensive Test of Phonological Processing-2 (CTOPP-2)
==WJ-IV==
The [http://www.riversidepublishing.com/products/wj-iv/index.html WJ-IV Tests of Achievement] contains 20 tests measuring reading, mathematics, written language, and academic knowledge.
As part of the study, we use only four of these tests, all focusing on basic reading skills and fluency.
<ul type="circle">
<li>'''Letter-Word Identification''': Letter-Word Identification measures the examinee’s word identification skills, a reading-writing (Grw) ability. The initial items require the individual to identify letters that appear in large type on the examinee’s side of the Test Book. The remaining items require the person to read aloud individual words correctly. The examinee is not required to know the meaning of any word. The items become increasingly difficult as the selected words appear less frequently in written English.</li>
<li>'''Word Attack''': Word Attack measures a person’s ability to apply phonic and structural analysis skills to the pronunciation of unfamiliar printed words, a reading-writing (Grw) ability. The initial items require the individual to produce the sounds for single letters. The remaining items require the person to read aloud letter combinations that are phonically consistent or are regular patterns in English orthography but are nonsense or low-frequency words. The items become more difficult as the complexity of the nonsense words increases.</li>
<li>'''Oral Reading''': Oral Reading is a measure of story reading accuracy and prosody, a reading-writing (Grw) ability. The individual reads aloud sentences that gradually increase in difficulty. Performance is scored for both accuracy and fluency of expression.</li>
<li>'''Sentence Reading Fluency''': Sentence Reading Fluency measures reading rate, requiring both reading-writing (Grw) and cognitive processing speed (Gs) abilities. The task involves reading simple sentences silently and quickly in the Response Booklet, deciding if the statement is true or false, and then circling Yes or No. The difficulty level of the sentences gradually increases to a moderate level. The individual attempts to complete as many items as possible within a 3-minute time limit.</li>
</ul>
==TOWRE-2==
The [http://www.proedinc.com/customer/productview.aspx?id=5074 TOWRE-2] contains two subtests, each of which has four alternate forms.
<ul type="circle">
<li>'''The Sight Word Efficiency''' subtest assesses the number of real words printed in vertical lists that an individual can accurately identify within 45 seconds.</li>
<li>The '''Phonemic Decoding Efficiency''' subtest measures the number of pronounceable nonwords presented in vertical lists that an individual can accurately decode within 45 seconds.</li>
</ul>
==WASI==
The [http://www.pearsonclinical.com/psychology/products/100000037/wechsler-abbreviated-scale-of-intelligence--second-edition-wasi-ii.html#tab-details WASI] is a shortened test of general cognitive ability, both verbal and non-verbal.
<ul type="circle">
<li>'''Vocabulary''': The vocabulary subtest has 31 items, including 3 picture items and 28 verbal items. For picture items, the examinee names the object presented visually. For verbal items, the examinee defines words that are presented visually and orally. Vocabulary is designed to measure an examinee’s word knowledge and verbal concept formation. </li>
<li>'''Matrix Reasoning''': The Matrix Reasoning subtest has 30 items. The examinee views a series of incomplete matrices and completes each one by selecting the correct response option. The subtest taps … classification and spatial ability, knowledge of part-whole relationships, simultaneous processing, and perceptual organization. </li>
</ul>
==CTOPP-2==
The [http://www.proedinc.com/customer/productview.aspx?id=5187 CTOPP-2] is a multi-part test that measures phonological awareness, phonological memory, and rapid naming skills. We use the following subtests:
<ul type="circle">
<li>'''Elision''': This 34-item subtest measures the extent to which an individual can say a word and then say what is left after dropping out designated sounds. For the first two items, the examiner says compound words and asks the examinee to say that word and then say the word that remains after dropping one of the compound words. For the remaining items, the individual listens to a word and repeats that word and then is asked to say the word without a specific sound. For example, the examinee is instructed, “Say bold.” After repeating “bold,” the examinee is told, “Now say ‘bold’ without saying /b/.” The correct response is “old.”</li>
<li>'''Memory for Digits''': This 28-item subtest measures the extent to which an individual can repeat a series of numbers ranging in length from two to eight digits. After the individual has listened to a series of audio-recorded numbers presented at a rate of 2 per second, he or she is asked to repeat the numbers in the same order in which they were heard.</li>
<li>'''Rapid Digit Naming''': This 36-item subtest measures the speed with which an individual can name numbers. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged numbers (i.e., 2, 3, 4, 5, 7, 8). The examinee is instructed to name the numbers on the top row from left to right, and then name the numbers on the next row from left to right, and so on, until all of the numbers have been named. The individual’s score is the total number of seconds taken to name all of the numbers on the page.</li>
<li>'''Rapid Letter Naming''': This 36-item subtest measures the speed with which an individual can name letters. The Picture Book contains one page for this subtest, which consists of four rows and nine columns of six randomly arranged letters (i.e., a, c, k, n, s, t). The examinee is instructed to name the letters on the top row from left to right, and then name the letters on the next row from left to right, and so on, until all of the letters have been named. The individual’s score is the total number of seconds taken to name all of the letters.</li>
</ul>
dlrhpi2cmp4aeraxy6gkp9oiqpavyg2
Brain Development & Education Lab
0
4
10
2015-08-13T19:04:16Z
Jyeatman
1
Created page with "Brain Development & Education Lab Wiki"
wikitext
text/x-wiki
Brain Development & Education Lab Wiki
1m4pmg7yyu0e5ke3tvh29wuhq7u7tkz
Cortical Thickness
0
16
189
122
2015-12-03T21:15:47Z
Pdonnelly
2
/* Freesurfer Segmentation */
wikitext
text/x-wiki
__TOC__
This page describes how we analyze cortical thickness with FreeSurfer.
==Setting up Freesurfer and MATLAB==
First make sure that you have the correct lines in your .bashrc file to run FreeSurfer:
export FREESURFER_HOME=/usr/local/freesurfer
source $FREESURFER_HOME/SetUpFreeSurfer.sh > /dev/null
export SUBJECTS_DIR=/mnt/diskArray/projects/freesurfer
Next make sure that you have the right toolboxes on your MATLAB search path. This should be done through your startup.m file:
addpath(genpath('~/git/yeatmanlab'));
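As a quick sanity check that the lab code is actually on the path, you can verify that a function used below resolves to a file:
which mri_rms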
==Create a T1 weighted nifti image for the subject==
Step 1: In a terminal, convert the PAR/REC files to NIfTI images. You may not need to do this if you have already gone through the [[Anatomy_Pipeline|Anatomy Pipeline]].
cd /mnt/diskArray/projects/MRI/[subid]
parrec2nii -c -b *.PAR
Step 2: In MATLAB, compute the root mean squared (RMS) image. Once again, this might already have been done in the [[Anatomy_Pipeline|Anatomy Pipeline]], in which case you can re-use that RMS image:
T1path = 'Path to t1 weighted image';
T1path = mri_rms(T1path); % Root mean squared image
==Freesurfer Segmentation==
FreeSurfer is a useful tool for segmenting a T1-weighted image and building a cortical mesh. To segment the subject's T1-weighted image using FreeSurfer, from the command line type:
recon-all -i /home/projects/MRI/[subjid]/[YYYYMMDD]/[subjid]_WIP_MEMP_VBM_SENSE_13_1_MSE.nii.gz -subjid [subid] -all
We will follow the steps outlined in the [https://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/LongitudinalTutorial FreeSurfer wiki] for analyzing cortical thickness. The first steps are described [https://surfer.nmr.mgh.harvard.edu/fswiki/LongitudinalProcessing here].
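As a rough sketch of the longitudinal stream (the recon-all flags come from the FreeSurfer longitudinal documentation; the subject IDs follow our naming convention and two time points are shown only as an example). First build the unbiased within-subject template ("base") from the cross-sectionally processed time points:
recon-all -base NLR_###_FL_MRbase -tp NLR_###_FL_MR1 -tp NLR_###_FL_MR2 -all
Then re-process each time point relative to the base:
recon-all -long NLR_###_FL_MR1 NLR_###_FL_MRbase -all
recon-all -long NLR_###_FL_MR2 NLR_###_FL_MRbase -all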
ryg73alq3sv9y51l0cyanfy8iyhofda
Data Analysis
0
2
3
2015-08-13T18:52:01Z
Jyeatman
1
Created page with "This page documents our data analysis"
wikitext
text/x-wiki
This page documents our data analysis
q4myjr48qfypj7glg22pvjt829z3fjt
Data Organization
0
34
199
198
2015-12-03T22:01:17Z
Pdonnelly
2
wikitext
text/x-wiki
On the server, the file-naming convention is as follows. This makes it easy to adapt to the naming structure that FreeSurfer expects, and it gives us a stable system for keeping all of the longitudinal data organized and clear.
==MRI==
Within /mnt/diskArray/projects/MRI:<br>
'''NLR_###_FL''' —Parent<br>
:'''YYYYMMDD'''—Scan Date<br>
::'''*.PAR'''—Raw data<br>
::'''*.REC'''<br>
::'''*.nii'''—Converted NIfTI images<br>
::'''*MSE.nii.gz'''—Root mean squared image<br>
==Anatomy==
Within /mnt/diskArray/projects/anatomy:<br>
'''NLR_###_FL'''<br>
:'''t1_acpc.nii.gz'''—AC-PC aligned, longitudinal average<br>
'''NLR_###_FL_MR#'''<br>
:'''t1_acpc.nii.gz'''—AC-PC aligned, individual scan<br>
==Freesurfer==
Within /mnt/diskArray/projects/freesurfer:<br>
:'''NLR_###_FL_MR#''' —Individual Segmentation<br>
:'''NLR_###_FL_MR#.long.NLR_###_FL_MRbase''' —Longitudinally processed output for each scan, registered to the base<br>
:'''NLR_###_FL_MRbase''' —Base average segmentation<br>
:'''qdec''' —Longitudinal data storage<br>
::'''long.qdec.table.dat''' —data table<br>
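For example, a hypothetical subject NLR_102_FL with one processed scan session (the subject ID and date here are made up) would end up with paths like:
/mnt/diskArray/projects/MRI/NLR_102_FL/20151001/
/mnt/diskArray/projects/anatomy/NLR_102_FL/t1_acpc.nii.gz
/mnt/diskArray/projects/anatomy/NLR_102_FL_MR1/t1_acpc.nii.gz
/mnt/diskArray/projects/freesurfer/NLR_102_FL_MR1/
/mnt/diskArray/projects/freesurfer/NLR_102_FL_MRbase/
/mnt/diskArray/projects/freesurfer/NLR_102_FL_MR1.long.NLR_102_FL_MRbase/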
2zhrsoonbm7c3w1jhhdzblq5pg0gtx1
Diffusion Pipeline
0
7
52
33
2015-09-30T23:22:36Z
Jyeatman
1
wikitext
text/x-wiki
==Preprocess diffusion data==
If diffusion data were acquired for the subject, we want to (a) correct for EPI distortions in the data using FSL's topup tool; (b) correct for subject motion and eddy currents; (c) fit a tensor model and create a dt6.mat file; (d) fit the CSD model with MRtrix; and (e) run AFQ to segment the fibers into all of the major fiber groups. Jason Yeatman has written a helpful utility to run FSL's topup and eddy functions:
fsl_preprocess(dwi_files, bvecs_file, bvals_file, pe_dir, outdir)
This function is also wrapped within another utility that runs the whole pipeline (steps a-e) on a subject:
bde_preprocessdiffusion
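As a minimal sketch of how fsl_preprocess might be called, following the argument order shown above (every path below is a placeholder, and the expected format of pe_dir should be checked in the function help):
dwi_files  = {'/mnt/diskArray/projects/MRI/[subid]/[YYYYMMDD]/dwi_run1.nii.gz'}; % diffusion run(s) (placeholder path)
bvecs_file = '/mnt/diskArray/projects/MRI/[subid]/[YYYYMMDD]/dwi_run1.bvec';     % gradient directions (placeholder path)
bvals_file = '/mnt/diskArray/projects/MRI/[subid]/[YYYYMMDD]/dwi_run1.bval';     % b-values (placeholder path)
pe_dir     = 2;                                                                  % phase-encode direction code (placeholder; see the function help)
outdir     = '/mnt/diskArray/projects/MRI/[subid]/[YYYYMMDD]/diffusion';         % output directory (placeholder path)
fsl_preprocess(dwi_files, bvecs_file, bvals_file, pe_dir, outdir)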
mcyhwsu9khtwl6pvw8w19cr51f7kjsd
FMRI
0
37
235
234
2016-01-14T18:41:13Z
Jyeatman
1
wikitext
text/x-wiki
__TOC__
=Organize the subject's data=
Put raw data into:
subjid/date/raw
This directory is hereafter referred to as the session directory.
Make event files (parfiles) and put them in:
subjid/date/Stimuli/parfiles
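A parfile is a plain-text list of event onsets. Assuming the standard vistasoft convention (one row per event: onset in seconds, condition number, condition name), a block-design parfile might look like the following; the onsets and condition labels here are made-up examples:
0    0    Fixation
16   1    Words
28   2    Faces
40   3    Objects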
=Initialize an mrVista session=
In a MATLAB terminal, change to the session directory and run mrInit:
mrInit
Add files to mrVista session
[[File:mrInit.png|400px|center|mrInit]]
Fill in the session description
[[File:mrInit_SessionDesc.png|400px|center]]
=Open Inplane view=
mrVista
[[File:Inplaneview.png|400px|center]]
Then run motion correction:
Analysis -> Motion Compensation -> Within + Between Scan
=Fit GLM=
First, associate a parfile with each scan and then group all of the MotionComp scans so that the GLM is fit to all of them:
GLM -> Assign Parfiles to Scan
GLM -> Grouping -> Group Scans
GLM -> Apply GLM/Contrast New Code
In the GLM window that comes up, set the HRF to SPM Difference of gammas, set the number of TRs (in our case, 8), and set the detrend option to Quadratic.
7m3n60sis1ffn0mwv98h9hpdjglwk8p
FMRI Data Acquisition
0
36
217
2016-01-08T21:11:34Z
Pdonnelly
2
Created page with "=Logistics= ==Scheduling== Be sure to schedule on the DISC calendar using the NLR_fMRI drop-down ==Materials Needed== 1. USB 2. Stimulus computer, charger 3. Dongle 4...."
wikitext
text/x-wiki
=Logistics=
==Scheduling==
Be sure to schedule on the DISC calendar using the NLR_fMRI drop-down
==Materials Needed==
1. USB
2. Stimulus computer, charger
3. Dongle
4. Ensure that WiFi and notifications are turned off on the device being used
5. Subject Information; including MRI Screening Form(s), MR Safe Glasses (if applicable)
==Set-Up==
1. Plug in the "Trigger Output" USB cord to the Stimulus computer
2. Plug in the Video VGA Trigger cord to the stimulus computer using the dongle
3. Press red "trigger switch" button to switch to the rear green LED light indication
4. On the output port console within the glass cabinet on the bottom shelf, press 0, 1, 0, 2, 0, 2, 0, 2 to switch communication between the desktop to the laptop and teh projector in the scanner room
5. Switch button press box to work with numbers as opposed to colors
==Running the Stimulus==
1. open MatLab
2. Navigate to runexperimentLocalizer.m
3. run the command: runexperimentLocalizer(#, subjid_#)
4. you will run the sequence 3 times for each subject. NOTE the scanner itself will trigger the sequence
==Script==
Before the first fMRI scan runs, say the following to the subject:
Alright, [subject name], now it's time to play the game that we practiced. Remember the first thing you're going to see is a gray screen with a white dot in the middle for a little while. After about 16 seconds the game will start and you're gonna see groups of images. They'll either be images of words, objects, or faces. Your job is to be as still as you can and press the button whenever the image repeats right in a row. Are you ready? ... [wait for subject response] ... Alright! Remember from now until you can't hear the machine running anymore you need to be as still as you possibly can. Try your best to wait until the end to scratch an itch and swallow. Got it? ... [wait for subject response] ... Awesome. Here we go!
plm7cqo39kzqx292mlvmdok52w20jkg
218
217
2016-01-08T21:12:47Z
Pdonnelly
2
/* Logistics */
wikitext
text/x-wiki
=Logistics=
==Scheduling==
Be sure to schedule on the DISC calendar using the NLR_fMRI drop-down
==Materials Needed==
1. USB
2. Stimulus computer, charger
3. Dongle
4. Ensure that WiFi and notifications are turned off on the device being used
5. Subject Information; including MRI Screening Form(s), MR Safe Glasses (if applicable)
==Set-Up==
1. Plug in the "Trigger Output" USB cord to the Stimulus computer
2. Plug in the Video VGA Trigger cord to the stimulus computer using the dongle
3. Press red "trigger switch" button to switch to the rear green LED light indication
4. On the output port console within the glass cabinet on the bottom shelf, press 0, 1, 0, 2, 0, 2, 0, 2 to switch communication between the desktop to the laptop and teh projector in the scanner room
5. Switch button press box to work with numbers as opposed to colors
==Running the Stimulus==
1. open MatLab
2. Navigate to runexperimentLocalizer.m
3. run the command: runexperimentLocalizer(#, subjid_#)
4. you will run the sequence 3 times for each subject. NOTE the scanner itself will trigger the sequence
==Script==
Before the first fMRI scan runs, say the following to the subject:
Alright, [subject name], now it's time to play the game that we practiced.
Remember the first thing you're going to see is a gray screen with a white dot in the middle for a little while.
After about 16 seconds the game will start and you're gonna see groups of images.
They'll either be images of words, objects, or faces.
Your job is to be as still as you can and press the button whenever the image repeats right in a row.
Are you ready? ... [wait for subject response] ... Alright!
Remember from now until you can't hear the machine running anymore you need to be as still as you possibly can.
Try your best to wait until the end to scratch an itch and swallow.
Got it? ... [wait for subject response] ... Awesome.
Here we go!
71hx13p2bhnfryyqws0ahqwodioxgni
219
218
2016-01-08T21:14:03Z
Pdonnelly
2
/* Logistics */
wikitext
text/x-wiki
=Logistics=
==Scheduling==
Be sure to schedule on the DISC calendar using the NLR_fMRI drop-down
==Materials Needed==
1. USB
2. Stimulus computer, charger
3. Dongle
4. Ensure that WiFi and notifications are turned off on the device being used
5. Subject Information; including MRI Screening Form(s), MR Safe Glasses (if applicable)
==Set-Up==
1. Plug in the "Trigger Output" USB cord to the Stimulus computer
2. Plug in the Video VGA Trigger cord to the stimulus computer using the dongle
3. Press red "trigger switch" button to switch to the rear green LED light indication
4. On the output port console within the glass cabinet on the bottom shelf, press 0, 1, 0, 2, 0, 2, 0, 2 to switch communication between the desktop to the laptop and teh projector in the scanner room
5. Switch button press box to work with numbers as opposed to colors
=Running the Stimulus=
==Procedure==
1. open MatLab
2. Navigate to runexperimentLocalizer.m
3. run the command: runexperimentLocalizer(#, subjid_#)
4. you will run the sequence 3 times for each subject. NOTE the scanner itself will trigger the sequence
==Script==
Before the first fMRI scan runs, say the following to the subject:
Alright, [subject name], now it's time to play the game that we practiced.
Remember the first thing you're going to see is a gray screen with a white dot in the middle for a little while.
After about 16 seconds the game will start and you're gonna see groups of images.
They'll either be images of words, objects, or faces.
Your job is to be as still as you can and press the button whenever the image repeats right in a row.
Are you ready? ... [wait for subject response] ... Alright!
Remember from now until you can't hear the machine running anymore you need to be as still as you possibly can.
Try your best to wait until the end to scratch an itch and swallow.
Got it? ... [wait for subject response] ... Awesome.
Here we go!
79mtqm5fxi55rdg45mda3ju0qmopevk
220
219
2016-01-08T21:17:43Z
Pdonnelly
2
/* Running the Stimulus */
wikitext
text/x-wiki
=Logistics=
==Scheduling==
Be sure to schedule on the DISC calendar using the NLR_fMRI drop-down
==Materials Needed==
1. USB
2. Stimulus computer, charger
3. Dongle
4. Ensure that WiFi and notifications are turned off on the device being used
5. Subject Information; including MRI Screening Form(s), MR Safe Glasses (if applicable)
==Set-Up==
1. Plug in the "Trigger Output" USB cord to the Stimulus computer
2. Plug in the Video VGA Trigger cord to the stimulus computer using the dongle
3. Press red "trigger switch" button to switch to the rear green LED light indication
4. On the output port console within the glass cabinet on the bottom shelf, press 0, 1, 0, 2, 0, 2, 0, 2 to switch communication between the desktop to the laptop and teh projector in the scanner room
5. Switch button press box to work with numbers as opposed to colors
=Running the Stimulus=
==Procedure==
1. open MatLab
2. Navigate to runexperimentLocalizer.m
3. run the command: runexperimentLocalizer(#, subjid_#)
4. you will run the sequence 3 times for each subject. NOTE the scanner itself will trigger the sequence
==Script==
Before the first fMRI scan runs, say the following to the subject:
Alright, [subject name], now it's time to play the game that we practiced.
Remember the first thing you're going to see is a gray screen with a white dot in the middle for a little while.
After about 16 seconds the game will start and you're gonna see groups of images.
They'll either be images of words, objects, or faces.
Your job is to be as still as you can and press the button whenever the image repeats right in a row.
Are you ready? ... [wait for subject response] ... Alright!
Remember from now until you can't hear the machine running anymore you need to be as still as you possibly can.
Try your best to wait until the end to scratch an itch and swallow.
Got it? ... [wait for subject response] ... Awesome.
Here we go!
In between each run, say the following:
How was that one, [subject name]?
Are you ready for the (next/last) run? ... [wait for subject response] ...
Great! Remember to keep as still as you possibly can. Here we go.
If the subject moves during a scan, then depending on the amount of movement, either place extra emphasis on staying still before the next run, or stop the scan and re-run it after reminding him/her to be very still.
rin1aewukxe8blg3fkqgvvgjt2ogint
221
220
2016-01-08T21:18:11Z
Pdonnelly
2
/* Set-Up */
wikitext
text/x-wiki
=Logistics=
==Scheduling==
Be sure to schedule on the DISC calendar using the NLR_fMRI drop-down
==Materials Needed==
1. USB
2. Stimulus computer, charger
3. Dongle
4. Ensure that WiFi and notifications are turned off on the device being used
5. Subject information, including MRI Screening Form(s) and MR Safe Glasses (if applicable)
==Set-Up==
1. Plug in the "Trigger Output" USB cord to the Stimulus computer
2. Plug in the Video VGA Trigger cord to the stimulus computer using the dongle
3. Press the red "trigger switch" button to switch to the rear green LED indicator
4. On the output port console within the glass cabinet on the bottom shelf, press 0, 1, 0, 2, 0, 2, 0, 2
to switch communication from the desktop to the laptop and the projector in the scanner room
5. Switch the button press box to numbers rather than colors
=Running the Stimulus=
==Procedure==
1. Open MATLAB
2. Navigate to runexperimentLocalizer.m
3. Run the command: runexperimentLocalizer(#, subjid_#)
4. You will run the sequence 3 times for each subject. NOTE: the scanner itself will trigger the sequence.
==Script==
Before the first fMRI scan runs, say the following to the subject:
Alright, [subject name], now it's time to play the game that we practiced.
Remember the first thing you're going to see is a gray screen with a white dot in the middle for a little while.
After about 16 seconds the game will start and you're gonna see groups of images.
They'll either be images of words, objects, or faces.
Your job is to be as still as you can and press the button whenever the image repeats right in a row.
Are you ready? ... [wait for subject response] ... Alright!
Remember from now until you can't hear the machine running anymore you need to be as still as you possibly can.
Try your best to wait until the end to scratch an itch and swallow.
Got it? ... [wait for subject response] ... Awesome.
Here we go!
In between each run, say the following:
How was that one, [subject name]?
Are you ready for the (next/last) run? ... [wait for subject response] ...
Great! Remember to keep as still as you possibly can. Here we go.
If the subject moves during a scan, then depending on the amount of movement, either place extra emphasis on staying still before the next run, or stop the scan and re-run it after reminding him/her to be very still.
loyecjmmlrtwnxtbfzxrwsd4keja3u6
222
221
2016-01-12T00:51:29Z
Pdonnelly
2
/* Script */
wikitext
text/x-wiki
=Logistics=
==Scheduling==
Be sure to schedule on the DISC calendar using the NLR_fMRI drop-down
==Materials Needed==
1. USB
2. Stimulus computer, charger
3. Dongle
4. Ensure that WiFi and notifications are turned off on the device being used
5. Subject information, including MRI Screening Form(s) and MR Safe Glasses (if applicable)
==Set-Up==
1. Plug in the "Trigger Output" USB cord to the Stimulus computer
2. Plug in the Video VGA Trigger cord to the stimulus computer using the dongle
3. Press the red "trigger switch" button to switch to the rear green LED indicator
4. On the output port console within the glass cabinet on the bottom shelf, press 0, 1, 0, 2, 0, 2, 0, 2
to switch communication from the desktop to the laptop and the projector in the scanner room
5. Switch the button press box to numbers rather than colors
=Running the Stimulus=
==Procedure==
1. Open MATLAB
2. Navigate to runexperimentLocalizer.m
3. Run the command: runexperimentLocalizer(#, subjid_#) (see the example below)
4. You will run the sequence 3 times for each subject. NOTE: the scanner itself will trigger the sequence.
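For reference, a full session at the MATLAB command window might look like the sketch below. This is only an illustration of the command template above: the subject ID ('NLR_101') and the run number as the first argument are assumptions, so check the help text in runexperimentLocalizer.m for the exact argument format.
 % Hypothetical example of the three localizer runs for one subject.
 % 'NLR_101' and the argument order are assumptions based on the
 % runexperimentLocalizer(#, subjid_#) template above.
 runexperimentLocalizer(1, 'NLR_101')   % run 1 -- starts when the scanner triggers
 runexperimentLocalizer(2, 'NLR_101')   % run 2
 runexperimentLocalizer(3, 'NLR_101')   % run 3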
==Script==
Before the first fMRI scan runs, say the following to the subject:
Alright, [subject name], now it's time to play the game that we practiced.
Really quick, do me a favor and without moving your head or looking down, can you push the button for me?
Remember the first thing you're going to see is a gray screen with a white dot in the middle for a little while.
After about 16 seconds the game will start and you're gonna see groups of images.
They'll either be images of words, objects, or faces.
Your job is to be as still as you can and press the button whenever the image repeats right in a row.
Are you ready? ... [wait for subject response] ... Alright!
Remember from now until you can't hear the machine running anymore you need to be as still as you possibly can.
Try your best to wait until the end to scratch an itch and swallow.
Got it? ... [wait for subject response] ... Awesome.
Here we go!
In between each run, say the following:
How was that one, [subject name]?
Are you ready for the (next/last) run? ... [wait for subject response] ...
Great! Remember to keep as still as you possibly can. Here we go.
If the subject moves during a scan, then depending on the amount of movement, either place extra emphasis on staying still before the next run, or stop the scan and re-run it after reminding him/her to be very still.
3uklvbo0hxd4raehngiblxoaef0oftw
Friends & Affiliates
0
42
245
2016-02-18T23:58:20Z
Pdonnelly
2
Created page with "=Institute for Learning & Brain Sciences= *[http://ilabs.washington.edu I-LABS website] *[http://megwiki.ilabs.uw.edu/ MEG Center]"
wikitext
text/x-wiki
=Institute for Learning & Brain Sciences=
*[http://ilabs.washington.edu I-LABS website]
*[http://megwiki.ilabs.uw.edu/ MEG Center]
c87y7va08kuvckqboigs7w5nov7bhrz
246
245
2016-02-19T00:07:05Z
Pdonnelly
2
/* Institute for Learning & Brain Sciences */
wikitext
text/x-wiki
=University of Washington=
*[http://ilabs.washington.edu Institute for Learning & Brain Sciences]
*[http://megwiki.ilabs.uw.edu/ MEG Center]
*[http://depts.washington.edu/labsn/404.php Laboratory for Auditory Brain Sciences and Neuroengineering]
**[https://sites.google.com/a/uw.edu/labsn/ [LABS]^n Wiki]
*[http://depts.washington.edu/ccdl/ Cognition & Cortical Dynamics Laboratory]
*[http://depts.washington.edu/sphsc/ Department of Speech & Hearing Sciences]
*[http://www.psych.uw.edu/ Department of Psychology]
*[http://escience.washington.edu/ E-Science Institute]
rtnvwnz0ys69wtumuheispsbo2i521g
247
246
2016-02-19T00:07:49Z
Pdonnelly
2
wikitext
text/x-wiki
=University of Washington=
*[http://ilabs.washington.edu Institute for Learning & Brain Sciences]
*[http://megwiki.ilabs.uw.edu/ MEG Center]
*[http://depts.washington.edu/labsn/404.php Laboratory for Auditory Brain Sciences and Neuroengineering]
**[https://sites.google.com/a/uw.edu/labsn/ '[LABS]^n' Wiki]
*[http://depts.washington.edu/ccdl/ Cognition & Cortical Dynamics Laboratory]
*[http://depts.washington.edu/sphsc/ Department of Speech & Hearing Sciences]
*[http://www.psych.uw.edu/ Department of Psychology]
*[http://escience.washington.edu/ E-Science Institute]
j9wts4iqmsxt8qpmn71pxz799eps0ps
248
247
2016-02-19T00:08:39Z
Pdonnelly
2
wikitext
text/x-wiki
=University of Washington=
*[http://ilabs.washington.edu Institute for Learning & Brain Sciences]
*[http://megwiki.ilabs.uw.edu/ MEG Center]
*[http://depts.washington.edu/labsn/404.php Laboratory for Auditory Brain Sciences and Neuroengineering]
**[https://sites.google.com/a/uw.edu/labsn/ [LABS]<sup>n</sup> Wiki]
*[http://depts.washington.edu/ccdl/ Cognition & Cortical Dynamics Laboratory]
*[http://depts.washington.edu/sphsc/ Department of Speech & Hearing Sciences]
*[http://www.psych.uw.edu/ Department of Psychology]
*[http://escience.washington.edu/ E-Science Institute]
qyansfsbstppm1n20cei1iejjmk02pw
249
248
2016-02-19T00:09:31Z
Pdonnelly
2
wikitext
text/x-wiki
=University of Washington=
*[http://ilabs.washington.edu Institute for Learning & Brain Sciences]
*[http://megwiki.ilabs.uw.edu/ MEG Center]
*[http://depts.washington.edu/labsn/404.php Laboratory for Auditory Brain Sciences and Neuroengineering]
**[https://sites.google.com/a/uw.edu/labsn/ (LABS)<sup>n</sup> Wiki]
*[http://depts.washington.edu/ccdl/ Cognition & Cortical Dynamics Laboratory]
*[http://depts.washington.edu/sphsc/ Department of Speech & Hearing Sciences]
*[http://www.psych.uw.edu/ Department of Psychology]
*[http://escience.washington.edu/ E-Science Institute]
6gtzsmacmhftzvac4rk35qeqwua6mvp
250
249
2016-02-19T00:09:50Z
Pdonnelly
2
wikitext
text/x-wiki
=University of Washington=
*[http://ilabs.washington.edu Institute for Learning & Brain Sciences]
*[http://megwiki.ilabs.uw.edu/ MEG Center]
*[http://depts.washington.edu/labsn/404.php Laboratory for Auditory Brain Sciences and Neuroengineering]
**[https://sites.google.com/a/uw.edu/labsn/ (LABS)<sup>N</sup> Wiki]
*[http://depts.washington.edu/ccdl/ Cognition & Cortical Dynamics Laboratory]
*[http://depts.washington.edu/sphsc/ Department of Speech & Hearing Sciences]
*[http://www.psych.uw.edu/ Department of Psychology]
*[http://escience.washington.edu/ E-Science Institute]
or8hmmh2gpe7vqj0ndxikj6ilssr6an
Friends Affiliates
0
41
238
2016-02-18T23:52:09Z
Pdonnelly
2
Created page with "=Institute for Learning & Brain Sciences= [http://ilabs.washington.edu I-LABS Website]"
wikitext
text/x-wiki
=Institute for Learning & Brain Sciences=
[http://ilabs.washington.edu I-LABS Website]
mg9ogyfqzqz411xd7xjp3jufdt8idmf
239
238
2016-02-18T23:53:23Z
Pdonnelly
2
/* Institute for Learning & Brain Sciences */
wikitext
text/x-wiki
=Institute for Learning & Brain Sciences=
[http://ilabs.washington.edu I-LABS website]
[http://megwiki.ilabs.uw.edu/ MEG Center]
t1daqm1ar83gch7excwtjxyy71yp5oz
240
239
2016-02-18T23:53:53Z
Pdonnelly
2
wikitext
text/x-wiki
=Institute for Learning & Brain Sciences=
[http://ilabs.washington.edu I-LABS website] <br />
[http://megwiki.ilabs.uw.edu/ MEG Center]
ch4osyrk20f2i5q2xv8q53boc5vykh4
241
240
2016-02-18T23:55:55Z
Pdonnelly
2
wikitext
text/x-wiki
=Institute for Learning & Brain Sciences=
*[http://ilabs.washington.edu I-LABS website]
*[http://megwiki.ilabs.uw.edu/ MEG Center]
c87y7va08kuvckqboigs7w5nov7bhrz
HCP Access
0
43
264
2016-03-24T22:38:08Z
Dstrodtman
5
Created page with "This page will contain information about obtaining permissions and downloading data, including Aspera connect installation."
wikitext
text/x-wiki
This page will contain information about obtaining permissions and downloading data, including Aspera connect installation.
psy64ft4b36vdi81hmtj8zmobf9afsk
267
264
2016-03-24T22:45:10Z
Dstrodtman
5
Dstrodtman moved page [[Accessing Data]] to [[HCP Access]] without leaving a redirect
wikitext
text/x-wiki
This page will contain information about obtaining permissions and downloading data, including Aspera connect installation.
psy64ft4b36vdi81hmtj8zmobf9afsk
273
267
2016-03-25T21:29:25Z
Dstrodtman
5
wikitext
text/x-wiki
This page will contain information about obtaining permissions and downloading data, including Aspera connect installation.
==Register for an Account==
Anyone accessing data from the Human Connectome Project must have created a user account and signed the data agreement, available here:
https://db.humanconnectome.org
==Aspera Connect==
db Human Connectome should prompt you to install Aspera Connect. Or the installation files can be found here:
http://asperasoft.com/connect
After downloading, close your browser and run
sh aspera-connect-[version].sh
Aspera Connect should be installed into your Applications. This will load the applet icon (a blue 'C' in white). Right click to edit preferences.
In the transfers tab, set your download location. Each subject will download as a *.zip with a corresponding *.zip.md5 file.
db Human Connectome should prompt you to add a security exception. If you are not prompted, try adding the following host in the Security tab of your Aspera Connect preferences:
aspera1.humanconnectome.org
rjtc4mofgqbs7bwz3q6we4j8l5azftm
299
273
2016-06-20T17:39:54Z
Dstrodtman
5
/* Aspera Connect */
wikitext
text/x-wiki
This page will contain information about obtaining permissions and downloading data, including Aspera connect installation.
==Register for an Account==
Anyone accessing data from the Human Connectome Project must have created a user account and signed the data agreement, available here:
https://db.humanconnectome.org
==Aspera Connect==
db Human Connectome should prompt you to install Aspera Connect. Or the installation files can be found here:
http://asperasoft.com/connect
Download to your ~/Downloads folder, close your browser, and run the following in the terminal:
cd ~/Downloads
tar -zxvf aspera-connect-[version].tar.gz
sh aspera-connect-[version].sh
Aspera Connect should be installed into your Applications. This will load the applet icon (a blue 'C' in white). Right click to edit preferences.
In the transfers tab, set your download location. Each subject will download as a *.zip with a corresponding *.zip.md5 file.
db Human Connectome should prompt you to add a security exception. If you are not prompted, try adding the following host in the Security tab of your Aspera Connect preferences:
aspera1.humanconnectome.org
r5nq2u8kqd470t18zpqk5rgwromezw4
300
299
2016-06-20T18:49:11Z
Dstrodtman
5
wikitext
text/x-wiki
This page will contain information about obtaining permissions and downloading data, including Aspera connect installation.
'''Note:''' db Human Connectome does not seem to work with Google Chrome. It is known to work for Firefox, so please use that browser to avoid problems.
==Register for an Account==
Anyone accessing data from the Human Connectome Project must have created a user account and signed the data agreement, available here:
https://db.humanconnectome.org
==Aspera Connect==
db Human Connectome should prompt you to install Aspera Connect. Or the installation files can be found here:
http://asperasoft.com/connect
Download to your ~/Downloads folder, close your browser, and run the following in the terminal:
cd ~/Downloads
tar -zxvf aspera-connect-[version].tar.gz
sh aspera-connect-[version].sh
Aspera Connect should be installed into your Applications. This will load the applet icon (a blue 'C' in white). Right click to edit preferences.
In the transfers tab, set your download location. Each subject will download as a *.zip with a corresponding *.zip.md5 file.
db Human Connectome should prompt you to add a security exception. If you are not prompted, try adding the following host in the Security tab of your Aspera Connect preferences:
aspera1.humanconnectome.org
smlf6i16g27w3i3xwl9eds7usksbg22
303
300
2016-06-23T18:33:07Z
Dstrodtman
5
wikitext
text/x-wiki
This page will contain information about obtaining permissions and downloading data, including Aspera connect installation.
'''Note:''' db Human Connectome does not seem to work with Google Chrome. It is known to work for Firefox, so please use that browser to avoid problems.
==Register for an Account==
Anyone accessing data from the Human Connectome Project must have created a user account and signed the data agreement, available here:
https://db.humanconnectome.org
==Aspera Connect==
db Human Connectome should prompt you to install Aspera Connect. Or the installation files can be found here:
http://asperasoft.com/connect
Download to your ~/Downloads folder, close your browser, and run the following in the terminal:
cd ~/Downloads
tar -zxvf aspera-connect-[version].tar.gz
sh aspera-connect-[version].sh
Aspera Connect should be installed into your Applications. This will load the applet icon (a blue 'C' in white). Right click to edit preferences.
In the transfers tab, set your download location. Each subject will download as a *.zip with a corresponding *.zip.md5 file.
db Human Connectome should prompt you to add a security exception. If you are not prompted, try adding the following host in the Security tab of your Aspera Connect preferences:
aspera1.humanconnectome.org
amdj3djvhu20um6vod5jxqnxtiqp06a
304
303
2016-06-23T19:03:02Z
Dstrodtman
5
wikitext
text/x-wiki
This page will contain information about obtaining permissions and downloading data, including Aspera connect installation.
'''Note:''' db Human Connectome does not seem to work with Google Chrome. It is known to work for Firefox, so please use that browser to avoid problems.
==Register for an Account==
Anyone accessing data from the Human Connectome Project must have created a user account and signed the data agreement, available here:
https://db.humanconnectome.org
==Aspera Connect==
db Human Connectome should prompt you to install Aspera Connect. Or the installation files can be found here:
http://asperasoft.com/connect
Download to your ~/Downloads folder, close your browser, and run the following in the terminal:
cd ~/Downloads
tar -zxvf aspera-connect-[version].tar.gz
sh aspera-connect-[version].sh
Aspera Connect should be installed into your Applications. In Mint, this will load the applet icon (a blue 'C' in white). Right click to edit preferences.
In the transfers tab, you can set a static destination for all downloaded files OR set it to prompt you where to save downloaded files.
In Ubuntu, you will have to initiate a download before you can change your settings (the gear in the lower left corner of your Transfers - Aspera Connect window). All downloads default to your desktop, so you will not want to download a large directory until you have set your preferences.
Each subject will download as a *.zip with a corresponding *.zip.md5 file.
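If you want to confirm that an archive downloaded intact, you can compare each *.zip against its *.zip.md5 file. The sketch below is one possible way to do this from MATLAB; the download directory is an assumption, and the *.zip.md5 files are assumed to contain the hash as their first token (md5sum format).
 % Hypothetical checksum verification for downloaded HCP subject archives.
 % dlDir is an assumption -- point it at your Aspera Connect download location.
 dlDir = fullfile(getenv('HOME'), 'Downloads', 'HCP');
 zips  = dir(fullfile(dlDir, '*.zip'));
 for ii = 1:numel(zips)
     zipPath  = fullfile(dlDir, zips(ii).name);
     expected = lower(strtok(strtrim(fileread([zipPath '.md5']))));
     md  = java.security.MessageDigest.getInstance('MD5');  % no external tools needed
     fid = fopen(zipPath, 'r');
     md.update(fread(fid, Inf, '*uint8'));
     fclose(fid);
     actual = lower(reshape(dec2hex(typecast(md.digest(), 'uint8'), 2)', 1, []));
     if strcmp(actual, expected)
         fprintf('%s: OK\n', zips(ii).name);
     else
         fprintf('%s: CHECKSUM MISMATCH\n', zips(ii).name);
     end
 end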
db Human Connectome should prompt you to add a security exception. If you are not prompted, try adding the following host in the Security tab of your Aspera Connect preferences:
aspera1.humanconnectome.org
tf5h113g2jtgsb38r0nmhhxa306p4aa
HCP Organization
0
46
270
2016-03-24T22:47:12Z
Dstrodtman
5
Created page with "This section will outline how data should be organized for proper management using BDE Lab software."
wikitext
text/x-wiki
This section will outline how data should be organized for proper management using BDE Lab software.
istbd8n3kqif0bfdfs5bto2tuvx8l67
271
270
2016-03-24T22:47:52Z
Dstrodtman
5
Dstrodtman moved page [[HCP Organized]] to [[HCP Organization]] without leaving a redirect
wikitext
text/x-wiki
This section will outline how data should be organized for proper management using BDE Lab software.
istbd8n3kqif0bfdfs5bto2tuvx8l67
HCP Process
0
44
265
2016-03-24T22:39:29Z
Dstrodtman
5
Created page with "This page will contain information about processing HCP data. This process has been entirely automated, with proper file management."
wikitext
text/x-wiki
This page will contain information about processing HCP data. This process has been entirely automated, with proper file management.
q0s4tulegb585jlzclc4wjbz20c3e8t
268
265
2016-03-24T22:45:48Z
Dstrodtman
5
Dstrodtman moved page [[Processing Data]] to [[HCP Process]]
wikitext
text/x-wiki
This page will contain information about processing HCP data. This process has been entirely automated, with proper file management.
q0s4tulegb585jlzclc4wjbz20c3e8t
274
268
2016-03-25T21:35:58Z
Dstrodtman
5
wikitext
text/x-wiki
This page will contain information about processing HCP data. This process has been entirely automated, with proper file management.
==HCP Extension for AFQ==
Preprocessing of HCP diffusion data for AFQ has been fully automated. Read the MATLAB help documentation for further explanation:
HCP_run_dtiInit
Fixes x flip of bvecs and rounds small bvals (<10) to 0 to comply with dtiInit.
e8gs3wotcpc9acdiiqbcsyt8yxkn7go
275
274
2016-03-25T21:44:47Z
Dstrodtman
5
wikitext
text/x-wiki
This page will contain information about processing HCP data. This process has been entirely automated, with proper file management.
==HCP Extension for AFQ==
Preprocessing of HCP diffusion data for AFQ has been fully automated. Output will increase the file directory size by approximately 50% (resulting in around 2 TB for the 900 subjects' diffusion data). Read the MATLAB help documentation for further explanation:
HCP_run_dtiInit
Fixes x flip of bvecs and rounds small bvals (<10) to 0 to comply with dtiInit.
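For concreteness, the two fixes amount to the following in MATLAB. This is an illustrative sketch only, not the HCP_run_dtiInit source; the file names are assumptions and should point at an HCP subject's diffusion data.
 % Illustrative sketch of the bvec/bval fixes described above (not the actual
 % HCP_run_dtiInit code). File names are assumptions.
 bvecs = dlmread('dwi.bvecs');   % 3 x N gradient directions
 bvals = dlmread('dwi.bvals');   % 1 x N b-values
 bvecs(1,:) = -bvecs(1,:);       % fix the x flip of the bvecs
 bvals(bvals < 10) = 0;          % round small b-values (<10) to 0 for dtiInit
 dlmwrite('dwi_aligned.bvecs', bvecs, 'delimiter', ' ');
 dlmwrite('dwi_aligned.bvals', bvals, 'delimiter', ' ');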
==AFQ for HCP==
Instructions forthcoming...
ljryawai581vq5f8uuits0iug2qvp6d
276
275
2016-03-29T21:04:52Z
Dstrodtman
5
wikitext
text/x-wiki
Software to process HCP data through AFQ is available from https://github.com/yeatmanlab/BrainTools/tree/master/projects/HCP
==HCP Extension for AFQ==
Processing of HCP diffusion data through AFQ has been fully automated. Output will result in a file directory roughly twice as large (around 2.5 TB for the 900 subjects' diffusion data, or 3.6 TB if the zipped files are left on the drive).
HCP_run_dtiInit
07p2eyn9gvxhbaiawai7h6jn4dyz42k
Helpful Links
0
30
179
2015-12-01T00:55:21Z
Pdonnelly
2
Created page with "Here are some helpful resources for the methods and information in this Wiki:"
wikitext
text/x-wiki
Here are some helpful resources for the methods and information in this Wiki:
nzt6kg38w8bhytiuifytlcxzs6x9791
ILABS Brain Seminar
0
14
75
2015-10-23T23:30:33Z
Jyeatman
1
Created page with "'''October 29 - Jason Yeatman''' Neuron. 2007 Jul 5;55(1):143-56. Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual..."
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Neuron. 2007 Jul 5;55(1):143-56. Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F1, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
'''November 5 -''' No Brain Seminar
'''November 12 - Open'''
'''November 19 - Open'''
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz'''
83c4z13frwb15d90d4z0famipz2u04d
76
75
2015-10-23T23:31:20Z
Jyeatman
1
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F1, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L. Neuron. 2007 Jul 5;55(1):143-56.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
'''November 5 -''' No Brain Seminar
'''November 12 - Open'''
'''November 19 - Open'''
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz'''
0qp1dczsl9ejyclupd35svsokc2ah2s
77
76
2015-10-23T23:31:37Z
Jyeatman
1
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F1, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L. Neuron. 2007 Jul 5;55(1):143-56.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
'''November 5 -''' No Brain Seminar
'''November 12 - Open'''
'''November 19 - Open'''
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz'''
1z4dsdh211s36tfgpg2grj5jxlsd11y
78
77
2015-10-24T19:22:13Z
Jyeatman
1
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F1, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L. Neuron. 2007 Jul 5;55(1):143-56.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
A good related paper for background:
Binder, Jeffrey R., et al. "Tuning of the human left fusiform gyrus to sublexical orthographic structure." Neuroimage 33.2 (2006): 739-748.
'''November 5 -''' No Brain Seminar
'''November 12 - Open'''
'''November 19 - Open'''
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz''' Something cool with decoding and single trial MEG analysis
ipupl43jy31h54toop8w6s7w2waq60n
79
78
2015-10-27T17:36:05Z
Jyeatman
1
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F1, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L. Neuron. 2007 Jul 5;55(1):143-56.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
A good related paper for background:
Binder, Jeffrey R., et al. "Tuning of the human left fusiform gyrus to sublexical orthographic structure." Neuroimage 33.2 (2006): 739-748.
'''November 5 -''' No Brain Seminar
'''November 12 - Open'''
'''November 19 - Ariel Rokem & Jason Yeatman''' Data sharing. Scientific transparency and reproducibility has become a major worry among scientists across disciplines and has also seen a lot of recent media attention. In response to these concerns, funding agencies and journals have been revising their policies on making published data openly available. We will lead a discussion on (1) best practices in data sharing, (2) resources that support and facilitate data sharing, (3) what data sharing means for the careers of young scientists.
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz''' Something cool with decoding and single trial MEG analysis
dp5i9i5ecem5ahgwt7t6cxigs3w3bjk
80
79
2015-10-27T18:16:46Z
Jyeatman
1
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F1, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L. Neuron. 2007 Jul 5;55(1):143-56.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
A good related paper for background:
Binder, Jeffrey R., et al. "Tuning of the human left fusiform gyrus to sublexical orthographic structure." Neuroimage 33.2 (2006): 739-748.
'''November 5 -''' No Brain Seminar
'''November 12 - Ross Maddox''' Tanner, Darren, Kara Morgan‐Short, and Steven J. Luck. "How inappropriate high‐pass filters can produce artifactual effects and incorrect conclusions in ERP studies of language and cognition." Psychophysiology (2015). http://www.ncbi.nlm.nih.gov/pubmed/25903295
This will be an informal discussion of these issues so please read the paper and plan to participate
'''November 19 - Ariel Rokem & Jason Yeatman''' Data sharing. Scientific transparency and reproducibility has become a major worry among scientists across disciplines and has also seen a lot of recent media attention. In response to these concerns, funding agencies and journals have been revising their policies on making published data openly available. We will lead a discussion on (1) best practices in data sharing, (2) resources that support and facilitate data sharing, (3) what data sharing means for the careers of young scientists.
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz''' Something cool with decoding and single trial MEG analysis
hp7q0dvrftda8ra3jis703nnqo0vhf0
81
80
2015-10-28T18:08:54Z
Jyeatman
1
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F1, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L. Neuron. 2007 Jul 5;55(1):143-56.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
A good related paper for background:
Binder, Jeffrey R., et al. "Tuning of the human left fusiform gyrus to sublexical orthographic structure." Neuroimage 33.2 (2006): 739-748.
'''November 5 - No Brain Seminar'''
'''November 12 - Ross Maddox''' Tanner, Darren, Kara Morgan‐Short, and Steven J. Luck. "How inappropriate high‐pass filters can produce artifactual effects and incorrect conclusions in ERP studies of language and cognition." Psychophysiology (2015). http://www.ncbi.nlm.nih.gov/pubmed/25903295
This will be an informal discussion of these issues so please read the paper and plan to participate
'''November 19 - Ariel Rokem & Jason Yeatman''' Data sharing. Scientific transparency and reproducibility has become a major worry among scientists across disciplines and has also seen a lot of recent media attention. In response to these concerns, funding agencies and journals have been revising their policies on making published data openly available. We will lead a discussion on (1) best practices in data sharing, (2) resources that support and facilitate data sharing, (3) what data sharing means for the careers of young scientists.
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz''' Something cool with decoding and single trial MEG analysis
aj7znkbvov2richn92r8ebi8ljusqhj
82
81
2015-10-28T18:10:34Z
Jyeatman
1
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F1, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L. Neuron. 2007 Jul 5;55(1):143-56.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
A good related paper for background:
Binder, Jeffrey R., et al. "Tuning of the human left fusiform gyrus to sublexical orthographic structure." Neuroimage 33.2 (2006): 739-748.
'''November 5 - No Brain Seminar'''
'''November 12 - Ross Maddox''' Tanner, Darren, Kara Morgan‐Short, and Steven J. Luck. "How inappropriate high‐pass filters can produce artifactual effects and incorrect conclusions in ERP studies of language and cognition." Psychophysiology (2015). http://www.ncbi.nlm.nih.gov/pubmed/25903295
This will be an informal discussion of these issues so please read the paper and plan to participate
'''November 19 - Ariel Rokem & Jason Yeatman''' Data sharing. Scientific transparency and reproducibility has become a major worry among scientists across disciplines and has also seen a lot of recent media attention. In response to these concerns, funding agencies and journals have been revising their policies on making published data openly available. We will lead a discussion on (1) best practices in data sharing, (2) resources that support and facilitate data sharing, (3) what data sharing means for the careers of young scientists.
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz''' Something cool with decoding and single trial MEG analysis
'''December 10 - Patrick Donnelly''' RAVE-O Reading Intervention Program.
3dov43jzl0l16mqq8wkhxqltdf7ir40
133
82
2015-11-11T23:21:38Z
Jyeatman
1
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F1, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L. Neuron. 2007 Jul 5;55(1):143-56.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
A good related paper for background:
Binder, Jeffrey R., et al. "Tuning of the human left fusiform gyrus to sublexical orthographic structure." Neuroimage 33.2 (2006): 739-748.
'''November 5 - No Brain Seminar'''
'''November 12 - Ross Maddox''' Tanner, Darren, Kara Morgan‐Short, and Steven J. Luck. "How inappropriate high‐pass filters can produce artifactual effects and incorrect conclusions in ERP studies of language and cognition." Psychophysiology (2015). http://www.ncbi.nlm.nih.gov/pubmed/25903295
This will be an informal discussion of these issues so please read the paper and plan to participate
'''November 19 - Ariel Rokem & Jason Yeatman''' Data sharing. Scientific transparency and reproducibility has become a major worry among scientists across disciplines and has also seen a lot of recent media attention. In response to these concerns, funding agencies and journals have been revising their policies on making published data openly available. We will lead a discussion on (1) best practices in data sharing, (2) resources that support and facilitate data sharing, (3) what data sharing means for the careers of young scientists.
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz''' Something cool with decoding and single trial MEG analysis
'''December 10 - Alex White''' Divided attention and reading.
'''December 17 - Patrick Donnelly''' RAVE-O Reading Intervention Program.
8imdxx0yasnv1ur5dgnefi2x6pmoeat
209
133
2015-12-15T20:16:28Z
Jyeatman
1
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F1, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L. Neuron. 2007 Jul 5;55(1):143-56.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
A good related paper for background:
Binder, Jeffrey R., et al. "Tuning of the human left fusiform gyrus to sublexical orthographic structure." Neuroimage 33.2 (2006): 739-748.
'''November 5 - No Brain Seminar'''
'''November 12 - Ross Maddox''' Tanner, Darren, Kara Morgan‐Short, and Steven J. Luck. "How inappropriate high‐pass filters can produce artifactual effects and incorrect conclusions in ERP studies of language and cognition." Psychophysiology (2015). http://www.ncbi.nlm.nih.gov/pubmed/25903295
This will be an informal discussion of these issues so please read the paper and plan to participate
'''November 19 - Ariel Rokem & Jason Yeatman''' Data sharing. Scientific transparency and reproducibility has become a major worry among scientists across disciplines and has also seen a lot of recent media attention. In response to these concerns, funding agencies and journals have been revising their policies on making published data openly available. We will lead a discussion on (1) best practices in data sharing, (2) resources that support and facilitate data sharing, (3) what data sharing means for the careers of young scientists.
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz''' Brain computer interface and single trial MEG analysis
'''December 10 - Alex White''' Divided attention and reading.
'''December 17 - Patrick Donnelly''' Dyslexia interventions targeting multiple components of reading.
Annu Rev Psychol. 2012;63:427-52. doi: 10.1146/annurev-psych-120710-100431. Epub 2011 Aug 11.
Rapid automatized naming (RAN) and reading fluency: implications for understanding and treatment of reading disabilities.
Norton ES, Wolf M.
Abstract
Fluent reading depends on a complex set of cognitive processes that must work together in perfect concert. Rapid automatized naming (RAN) tasks provide insight into this system, acting as a microcosm of the processes involved in reading. In this review, we examine both RAN and reading fluency and how each has shaped our understanding of reading disabilities. We explore the research that led to our current understanding of the relationships between RAN and reading and what makes RAN unique as a cognitive measure. We explore how the automaticity that supports RAN affects reading across development, reading abilities, and languages, and the biological bases of these processes. Finally, we bring these converging areas of knowledge together by examining what the collective studies of RAN and reading fluency contribute to our goals of creating optimal assessments and interventions that help every child become a fluent, comprehending reader.
J Learn Disabil. 2012 Mar-Apr;45(2):99-127. doi: 10.1177/0022219409355472. Epub 2010 May 5.
Multiple-component remediation for developmental reading disabilities: IQ, socioeconomic status, and race as factors in remedial outcome.
Morris RD, Lovett MW, Wolf M, Sevcik RA, Steinbach KA, Frijters JC, Shapiro MB.
Abstract
Results from a controlled evaluation of remedial reading interventions are reported: 279 young disabled readers were randomly assigned to a program according to a 2 × 2 × 2 factorial design (IQ, socioeconomic status [SES], and race). The effectiveness of two multiple-component intervention programs for children with reading disabilities (PHAB + RAVE-O; PHAB + WIST) was evaluated against alternate (CSS, MATH) and phonological control programs. Interventions were taught an hour daily for 70 days on a 1:4 ratio at three different sites. Multiple-component programs showed significant improvements relative to control programs on all basic reading skills after 70 hours and at 1-year follow-up. Equivalent gains were observed for different racial, SES, and IQ groups. These factors did not systematically interact with program. Differential outcomes for word identification, fluency, comprehension, and vocabulary were found between the multidimensional programs, although equivalent long-term outcomes and equal continued growth confirmed that different pathways exist to effective reading remediation.
5duakfxb88guku7mf4plpvz9o3vkgrs
210
209
2015-12-16T01:50:07Z
Jyeatman
1
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F1, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L. Neuron. 2007 Jul 5;55(1):143-56.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
A good related paper for background:
Binder, Jeffrey R., et al. "Tuning of the human left fusiform gyrus to sublexical orthographic structure." Neuroimage 33.2 (2006): 739-748.
'''November 5 - No Brain Seminar'''
'''November 12 - Ross Maddox''' Tanner, Darren, Kara Morgan‐Short, and Steven J. Luck. "How inappropriate high‐pass filters can produce artifactual effects and incorrect conclusions in ERP studies of language and cognition." Psychophysiology (2015). http://www.ncbi.nlm.nih.gov/pubmed/25903295
This will be an informal discussion of these issues so please read the paper and plan to participate
'''November 19 - Ariel Rokem & Jason Yeatman''' Data sharing. Scientific transparency and reproducibility has become a major worry among scientists across disciplines and has also seen a lot of recent media attention. In response to these concerns, funding agencies and journals have been revising their policies on making published data openly available. We will lead a discussion on (1) best practices in data sharing, (2) resources that support and facilitate data sharing, (3) what data sharing means for the careers of young scientists.
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz''' Brain computer interface and single trial MEG analysis
'''December 10 - Alex White''' Divided attention and reading.
'''December 17 - Patrick Donnelly''' Dyslexia interventions targeting multiple components of reading.
Annu Rev Psychol. 2012;63:427-52. doi: 10.1146/annurev-psych-120710-100431. Epub 2011 Aug 11.
Rapid automatized naming (RAN) and reading fluency: implications for understanding and treatment of reading disabilities.
Norton ES, Wolf M.
J Learn Disabil. 2012 Mar-Apr;45(2):99-127. doi: 10.1177/0022219409355472. Epub 2010 May 5.
Multiple-component remediation for developmental reading disabilities: IQ, socioeconomic status, and race as factors in remedial outcome.
Morris RD, Lovett MW, Wolf M, Sevcik RA, Steinbach KA, Frijters JC, Shapiro MB.
'''December 24 - No Brain Seminar'''
'''December 31 - No Brain Seminar'''
'''January 7 - No Brain Seminar'''
'''January 14 - No Brain Seminar'''
'''January 21 - Samu Taulu''' New MEG developments
n2aa0vmokfrwe0z6hp7qnqifgb8ys5r
211
210
2015-12-17T01:21:54Z
Jyeatman
1
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F1, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L. Neuron. 2007 Jul 5;55(1):143-56.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
A good related paper for background:
Binder, Jeffrey R., et al. "Tuning of the human left fusiform gyrus to sublexical orthographic structure." Neuroimage 33.2 (2006): 739-748.
'''November 5 - No Brain Seminar'''
'''November 12 - Ross Maddox''' Tanner, Darren, Kara Morgan‐Short, and Steven J. Luck. "How inappropriate high‐pass filters can produce artifactual effects and incorrect conclusions in ERP studies of language and cognition." Psychophysiology (2015). http://www.ncbi.nlm.nih.gov/pubmed/25903295
This will be an informal discussion of these issues so please read the paper and plan to participate
'''November 19 - Ariel Rokem & Jason Yeatman''' Data sharing. Scientific transparency and reproducibility has become a major worry among scientists across disciplines and has also seen a lot of recent media attention. In response to these concerns, funding agencies and journals have been revising their policies on making published data openly available. We will lead a discussion on (1) best practices in data sharing, (2) resources that support and facilitate data sharing, (3) what data sharing means for the careers of young scientists.
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz''' Brain computer interface and single trial MEG analysis
'''December 10 - Alex White''' Divided attention and reading.
'''December 17 - Patrick Donnelly''' Dyslexia interventions targeting multiple components of reading.
Annu Rev Psychol. 2012;63:427-52. doi: 10.1146/annurev-psych-120710-100431. Epub 2011 Aug 11.
Rapid automatized naming (RAN) and reading fluency: implications for understanding and treatment of reading disabilities.
Norton ES, Wolf M.
J Learn Disabil. 2012 Mar-Apr;45(2):99-127. doi: 10.1177/0022219409355472. Epub 2010 May 5.
Multiple-component remediation for developmental reading disabilities: IQ, socioeconomic status, and race as factors in remedial outcome.
Morris RD, Lovett MW, Wolf M, Sevcik RA, Steinbach KA, Frijters JC, Shapiro MB.
'''December 24 - No Brain Seminar'''
'''December 31 - No Brain Seminar'''
'''January 7 - OPEN'''
'''January 14 - OPEN'''
'''January 21 - Samu Taulu''' New MEG developments
'''January 28 - Caitlin Hudac''' Autism, eye movements and ERPs
od5eay1frkmlej5bo4kg0t09nnqvov2
213
211
2015-12-17T23:16:57Z
Jyeatman
1
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F1, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L. Neuron. 2007 Jul 5;55(1):143-56.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
A good related paper for background:
Binder, Jeffrey R., et al. "Tuning of the human left fusiform gyrus to sublexical orthographic structure." Neuroimage 33.2 (2006): 739-748.
'''November 5 - No Brain Seminar'''
'''November 12 - Ross Maddox''' Tanner, Darren, Kara Morgan‐Short, and Steven J. Luck. "How inappropriate high‐pass filters can produce artifactual effects and incorrect conclusions in ERP studies of language and cognition." Psychophysiology (2015). http://www.ncbi.nlm.nih.gov/pubmed/25903295
This will be an informal discussion of these issues, so please read the paper and plan to participate.
'''November 19 - Ariel Rokem & Jason Yeatman''' Data sharing. Scientific transparency and reproducibility has become a major worry among scientists across disciplines and has also seen a lot of recent media attention. In response to these concerns, funding agencies and journals have been revising their policies on making published data openly available. We will lead a discussion on (1) best practices in data sharing, (2) resources that support and facilitate data sharing, (3) what data sharing means for the careers of young scientists.
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz''' Brain computer interface and single trial MEG analysis
'''December 10 - Alex White''' Divided attention and reading.
'''December 17 - Patrick Donnelly''' Dyslexia interventions targeting multiple components of reading.
Norton ES, Wolf M. Rapid automatized naming (RAN) and reading fluency: implications for understanding and treatment of reading disabilities. Annu Rev Psychol. 2012;63:427-52. doi: 10.1146/annurev-psych-120710-100431. Epub 2011 Aug 11.
Morris RD, Lovett MW, Wolf M, Sevcik RA, Steinbach KA, Frijters JC, Shapiro MB. Multiple-component remediation for developmental reading disabilities: IQ, socioeconomic status, and race as factors in remedial outcome. J Learn Disabil. 2012 Mar-Apr;45(2):99-127. doi: 10.1177/0022219409355472. Epub 2010 May 5.
'''December 24 - No Brain Seminar'''
'''December 31 - No Brain Seminar'''
'''January 7 - Christina Zhao''' EEG project looking at how tempo and temporal structure type influence temporal structure processing in adults.
'''January 14 - OPEN'''
'''January 21 - Samu Taulu''' New MEG developments
'''January 28 - Caitlin Hudac''' Autism, eye movements and ERPs
2mkvfz4vaigh27jq7iqqbt6nv9luzda
214
213
2015-12-18T01:09:36Z
Jyeatman
1
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L. Neuron. 2007 Jul 5;55(1):143-56.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
A good related paper for background:
Binder, Jeffrey R., et al. "Tuning of the human left fusiform gyrus to sublexical orthographic structure." Neuroimage 33.2 (2006): 739-748.
'''November 5 - No Brain Seminar'''
'''November 12 - Ross Maddox''' Tanner, Darren, Kara Morgan‐Short, and Steven J. Luck. "How inappropriate high‐pass filters can produce artifactual effects and incorrect conclusions in ERP studies of language and cognition." Psychophysiology (2015). http://www.ncbi.nlm.nih.gov/pubmed/25903295
This will be an informal discussion of these issues, so please read the paper and plan to participate.
'''November 19 - Ariel Rokem & Jason Yeatman''' Data sharing. Scientific transparency and reproducibility has become a major worry among scientists across disciplines and has also seen a lot of recent media attention. In response to these concerns, funding agencies and journals have been revising their policies on making published data openly available. We will lead a discussion on (1) best practices in data sharing, (2) resources that support and facilitate data sharing, (3) what data sharing means for the careers of young scientists.
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz''' Brain computer interface and single trial MEG analysis
'''December 10 - Alex White''' Divided attention and reading.
'''December 17 - Patrick Donnelly''' Dyslexia interventions targeting multiple components of reading.
Norton ES, Wolf M. Rapid automatized naming (RAN) and reading fluency: implications for understanding and treatment of reading disabilities. Annu Rev Psychol. 2012;63:427-52. doi: 10.1146/annurev-psych-120710-100431. Epub 2011 Aug 11.
Morris RD, Lovett MW, Wolf M, Sevcik RA, Steinbach KA, Frijters JC, Shapiro MB. Multiple-component remediation for developmental reading disabilities: IQ, socioeconomic status, and race as factors in remedial outcome. J Learn Disabil. 2012 Mar-Apr;45(2):99-127. doi: 10.1177/0022219409355472. Epub 2010 May 5.
'''December 24 - No Brain Seminar'''
'''December 31 - No Brain Seminar'''
'''January 7 - Christina Zhao''' EEG project looking at how tempo and temporal structure type influence temporal structure processing in adults.
'''January 14 - OPEN'''
'''January 21 - Samu Taulu''' New MEG developments
'''January 28 - Caitlin Hudac''' ''The eyes have it: Potential uses for the integration of eye tracking (ET) and single-trial MEG/EEG.'' I will describe my recent work using single-trial ERP and EEG analyses to describe signal habituation of the social brain in autism. However, it is important to consider how dynamic behavioral changes (e.g., areas of visual attention) may relate to reduced social brain responses. I will propose a potential new project integrating ET-MEG and/or ET-EEG and seek feedback from the group.
swq4pisn80mz1o2yfc44420nie2r9px
223
214
2016-01-12T22:23:59Z
Jyeatman
1
wikitext
text/x-wiki
'''October 29 - Jason Yeatman''' Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Vinckier F, Dehaene S, Jobert A, Dubus JP, Sigman M, Cohen L. Neuron. 2007 Jul 5;55(1):143-56.
Abstract
Visual word recognition has been proposed to rely on a hierarchy of increasingly complex neuronal detectors, from individual letters to bigrams and morphemes. We used fMRI to test whether such a hierarchy is present in the left occipitotemporal cortex, at the site of the visual word-form area, and with an anterior-to-posterior progression. We exposed adult readers to (1) false-font strings; (2) strings of infrequent letters; (3) strings of frequent letters but rare bigrams; (4) strings with frequent bigrams but rare quadrigrams; (5) strings with frequent quadrigrams; (6) real words. A gradient of selectivity was observed through the entire span of the occipitotemporal cortex, with activation becoming more selective for higher-level stimuli toward the anterior fusiform region. A similar gradient was also seen in left inferior frontoinsular cortex. Those gradients were asymmetrical in favor of the left hemisphere. We conclude that the left occipitotemporal visual word-form area, far from being a homogeneous structure, presents a high degree of functional and spatial hierarchical organization which must result from a tuning process during reading acquisition.
A good related paper for background:
Binder, Jeffrey R., et al. "Tuning of the human left fusiform gyrus to sublexical orthographic structure." Neuroimage 33.2 (2006): 739-748.
'''November 5 - No Brain Seminar'''
'''November 12 - Ross Maddox''' Tanner, Darren, Kara Morgan‐Short, and Steven J. Luck. "How inappropriate high‐pass filters can produce artifactual effects and incorrect conclusions in ERP studies of language and cognition." Psychophysiology (2015). http://www.ncbi.nlm.nih.gov/pubmed/25903295
This will be an informal discussion of these issues, so please read the paper and plan to participate.
'''November 19 - Ariel Rokem & Jason Yeatman''' Data sharing. Scientific transparency and reproducibility has become a major worry among scientists across disciplines and has also seen a lot of recent media attention. In response to these concerns, funding agencies and journals have been revising their policies on making published data openly available. We will lead a discussion on (1) best practices in data sharing, (2) resources that support and facilitate data sharing, (3) what data sharing means for the careers of young scientists.
'''November 26 - Thanksgiving'''
'''December 3 - Mark Wronkiewicz''' Brain computer interface and single trial MEG analysis
'''December 10 - Alex White''' Divided attention and reading.
'''December 17 - Patrick Donnelly''' Dyslexia interventions targeting multiple components of reading.
Norton ES, Wolf M. Rapid automatized naming (RAN) and reading fluency: implications for understanding and treatment of reading disabilities. Annu Rev Psychol. 2012;63:427-52. doi: 10.1146/annurev-psych-120710-100431. Epub 2011 Aug 11.
Morris RD, Lovett MW, Wolf M, Sevcik RA, Steinbach KA, Frijters JC, Shapiro MB. Multiple-component remediation for developmental reading disabilities: IQ, socioeconomic status, and race as factors in remedial outcome. J Learn Disabil. 2012 Mar-Apr;45(2):99-127. doi: 10.1177/0022219409355472. Epub 2010 May 5.
'''December 24 - No Brain Seminar'''
'''December 31 - No Brain Seminar'''
'''January 7 - Christina Zhao''' EEG project looking at how tempo and temporal structure type influence temporal structure processing in adults.
'''January 14 - OPEN'''
'''January 21 - Samu Taulu''' Brief review of basic MEG physics as a basis for new developments
- Why is the distance of the head to the MEG array so important?
- What is the time resolution of MEG based on?
- Why we shouldn't apply (inverse) models that are too complex for infant data
'''January 28 - Caitlin Hudac''' ''The eyes have it: Potential uses for the integration of eye tracking (ET) and single-trial MEG/EEG.'' I will describe my recent work using single-trial ERP and EEG analyses to describe signal habituation of the social brain in autism. However, it is important to consider how dynamic behavioral changes (e.g., areas of visual attention) may relate to reduced social brain responses. I will propose a potential new project integrating ET-MEG and/or ET-EEG and seek feedback from the group.
ox7ng03dgpkdnqfi8a98xbwg9xpjcqm
MEG Data Acquisition
0
9
54
2015-10-21T00:20:03Z
Jyeatman
1
Created page with "1. After running freesurfer on a subject's T1 anatomy we next need to create a BEM model. cd /home/jyeatman/git/mnefun/bin python run_mne_bem.py --subject NLR_201_GS --lay..."
wikitext
text/x-wiki
1. After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
czkbeb4khch2wvh23bvr7wh1juhsdt6
55
54
2015-10-23T19:14:25Z
Jyeatman
1
wikitext
text/x-wiki
1. After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
2. To visualize source localized data
mne_analyze
3. File -> Load Surface -> Select Inflated
aclqwahonz28kooxqfix2z5xu2jhfxu
57
55
2015-10-23T19:15:39Z
Jyeatman
1
wikitext
text/x-wiki
1. After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
2. To visualize source localized data
mne_analyze
3. File -> Load Surface -> Select Inflated
[[File:LoadInverse-mne analyze.png]]
26hzyjfbksdh5j8g6367i68ul7huvhj
58
57
2015-10-23T19:16:10Z
Jyeatman
1
wikitext
text/x-wiki
1. After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
2. To visualize source localized data
mne_analyze
3. File -> Load Surface -> Select Inflated
4. File-> Open
[[File:LoadInverse-mne analyze.png]]
2mtaepr11h9zxz7h8khz4gvpuqvrrgu
59
58
2015-10-23T19:49:08Z
Jyeatman
1
wikitext
text/x-wiki
1. After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
2. Coordinate alignment.
mne_analyze
File -> Load Surface -> Select Inflated
File -> Load Digitizer Data -> sss_fif -> select any raw file
Adjust -> Coordinate Alignment
View -> Show viewer
Click Options -> check Digitizer Data and HPI and landmarks only,
Click each Fiducial location and then click "Align using fiducials"
Save Default
Make a "trans" folder within the subject's directory
Move transform file and rename subj-trans.fif
2. To visualize source localized data
mne_analyze
3. File -> Load Surface -> Select Inflated
4. File-> Open
[[File:LoadInverse-mne analyze.png]]
To adjust sensor plots: Adjust -> Scales
To adjust source visualization: Adjust -> Estimates
ia8qxqswowlxjr1zden9iwg8eizkv0y
63
59
2015-10-23T22:15:05Z
Jyeatman
1
wikitext
text/x-wiki
== Creating a BEM model for source localization ==
After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
== Aligning MEG sensor data to the BEM model ==
Open mne_analyze to compute the coordinate alignment. In mne_analyze load the subject's surface and digitizer data:
File -> Load Surface -> Select Inflated
File -> Load Digitizer Data -> sss_fif -> select any raw file
Next adjust how these digitized points align with the scalp in the MRI
Adjust -> Coordinate Alignment
View -> Show viewer
Within the viewer window click the "Options" button. Within the "Viewer Options" dialogue check "digitizer data" and "HPI and landmarks only". [[File:Viewer options.png]]
Click Options -> check Digitizer Data and HPI and landmarks only,
Click each Fiducial location and then click "Align using fiducials"
Save Default
Make a "trans" folder within the subject's directory
Move transform file and rename subj-trans.fif
2. To visualize source localized data
mne_analyze
3. File -> Load Surface -> Select Inflated
4. File-> Open
[[File:LoadInverse-mne analyze.png]]
To adjust sensor plots: Adjust -> Scales
To adjust source visualization: Adjust -> Estimates
l6v8akxuva9flpy1vdw6oo341tozdyd
64
63
2015-10-23T22:24:59Z
Jyeatman
1
wikitext
text/x-wiki
== Creating a BEM model for source localization ==
After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
== Aligning MEG sensor data to the BEM model ==
Open mne_analyze to compute the coordinate alignment. In mne_analyze load the subject's surface and digitizer data:
File -> Load Surface -> Select Inflated
File -> Load Digitizer Data -> sss_fif -> select any raw file
Next adjust how these digitized points align with the scalp in the MRI
Adjust -> Coordinate Alignment
View -> Show viewer
Within the viewer window click the "Options" button. Within the "Viewer Options" dialogue check "digitizer data" and "HPI and landmarks only". [[File:Viewer options.png]]
Next mark the fiducial locations on the scalp. To do this click LAP, RAP and Nasion in the "Adjust coordinate alignment" dialogue and then mark each spot by clicking on the scalp surface . After marking each location click "Align using fiducials". [[File:Mne analyze AdjustCoordAlign.png]]
From this point it is an art of getting as many of the digitizer points as possible to lie on the scalp. In the "Viewer Options" dialogue make the scalp transparent and un-check the "HPI and landmarks only" button. This will let you see where each digitized point lies with respect to the scalp. You want each point on the surface but not below it. Blue points are below the surface. Adjust points manually with the arrow buttons and using the ICP algorithm until you are happy. [[File:Mne analyze viewer.png]]
Once you have aligned everything, click "Save Default" in the coordinate adjustment dialogue. This will save the transform in the subject's folder. Then make a "trans" folder within the subject's directory, move the transform file there, and rename it subj-trans.fif
== Visualizing data in source space ==
mne_analyze
File -> Load Surface -> Select Inflated
File-> Open
[[File:LoadInverse-mne analyze.png]]
To adjust sensor plots: Adjust -> Scales
To adjust source visualization: Adjust -> Estimates
9t3eb2mb95j0x46glrdgj1z7i89tp1q
65
64
2015-10-23T22:27:00Z
Jyeatman
1
wikitext
text/x-wiki
== Creating a BEM model for source localization ==
After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
== Aligning MEG sensor data to the BEM model ==
Open mne_analyze to compute the coordinate alignment. In mne_analyze load the subject's surface and digitizer data:
File -> Load Surface -> Select Inflated
File -> Load Digitizer Data -> sss_fif -> select any raw file
Next adjust how these digitized points align with the scalp in the MRI
Adjust -> Coordinate Alignment
View -> Show viewer
Within the viewer window click the "Options" button. Within the "Viewer Options" dialogue check "digitizer data" and "HPI and landmarks only". [[File:Viewer options.png|center]]
Next mark the fiducial locations on the scalp. To do this click LAP, RAP and Nasion in the "Adjust coordinate alignment" dialogue and then mark each spot by clicking on the scalp surface . After marking each location click "Align using fiducials". [[File:Mne analyze AdjustCoordAlign.png]]
From this point it is an art of getting as many of the digitizer points as possible to lie on the scalp. In the "Viewer Options" dialogue make the scalp transparent and un-check the "HPI and landmarks only" button. This will let you see where each digitized point lies with respect to the scalp. You want each point on the surface but not below it. Blue points are below the surface. Adjust points manually with the arrow buttons and using the ICP algorithm until you are happy. [[File:Mne analyze viewer.png|center]]
Once you have aligned everything, click "Save Default" in the coordinate adjustment dialogue. This will save the transform in the subject's folder. Then make a "trans" folder within the subject's directory, move the transform file there, and rename it subj-trans.fif
== Visualizing data in source space ==
mne_analyze
File -> Load Surface -> Select Inflated
File-> Open
[[File:LoadInverse-mne analyze.png|center]]
To adjust sensor plots: Adjust -> Scales
To adjust source visualization: Adjust -> Estimates
bhbytgy6mcyk85hndjbmfrdamc4727t
66
65
2015-10-23T22:28:10Z
Jyeatman
1
wikitext
text/x-wiki
== Creating a BEM model for source localization ==
After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
== Aligning MEG sensor data to the BEM model ==
Open mne_analyze to compute the coordinate alignment. In mne_analyze load the subject's surface and digitizer data:
File -> Load Surface -> Select Inflated
File -> Load Digitizer Data -> sss_fif -> select any raw file
Next adjust how these digitized points align with the scalp in the MRI
Adjust -> Coordinate Alignment
View -> Show viewer
Within the viewer window click the "Options" button. Within the "Viewer Options" dialogue check "digitizer data" and "HPI and landmarks only". [[File:Viewer options.png|center]]
Next mark the fiducial locations on the scalp. To do this click LAP, RAP and Nasion in the "Adjust coordinate alignment" dialogue and then mark each spot by clicking on the scalp surface . After marking each location click "Align using fiducials". [[File:Mne analyze AdjustCoordAlign.png|50px|center]]
From this point it is an art of getting as many of the digitizer points as possible to lie on the scalp. In the "Viewer Options" dialogue make the scalp transparent and un-check the "HPI and landmarks only" button. This will let you see where each digitized point lies with respect to the scalp. You want each point on the surface but not below it. Blue points are below the surface. Adjust points manually with the arrow buttons and using the ICP algorithm until you are happy. [[File:Mne analyze viewer.png|center]]
Once you have aligned everything, click "Save Default" in the coordinate adjustment dialogue. This will save the transform in the subject's folder. Then make a "trans" folder within the subject's directory, move the transform file there, and rename it subj-trans.fif
== Visualizing data in source space ==
mne_analyze
File -> Load Surface -> Select Inflated
File-> Open
[[File:LoadInverse-mne analyze.png|center]]
To adjust sensor plots: Adjust -> Scales
To adjust source visualization: Adjust -> Estimates
2kie08q38ijxpvnbsyoiqw9kg3lbqsi
67
66
2015-10-23T22:28:43Z
Jyeatman
1
wikitext
text/x-wiki
== Creating a BEM model for source localization ==
After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
== Aligning MEG sensor data to the BEM model ==
Open mne_analyze to compute the coordinate alignment. In mne_analyze load the subject's surface and digitizer data:
File -> Load Surface -> Select Inflated
File -> Load Digitizer Data -> sss_fif -> select any raw file
Next adjust how these digitized points align with the scalp in the MRI
Adjust -> Coordinate Alignment
View -> Show viewer
Within the viewer window click the "Options" button. Within the "Viewer Options" dialogue check "digitizer data" and "HPI and landmarks only". [[File:Viewer options.png|center]]
Next mark the fiducial locations on the scalp. To do this click LAP, RAP and Nasion in the "Adjust coordinate alignment" dialogue and then mark each spot by clicking on the scalp surface . After marking each location click "Align using fiducials". [[File:Mne analyze AdjustCoordAlign.png|200px|center]]
From this point it is an art of getting as many of the digitizer points as possible to lie on the scalp. In the "Viewer Options" dialogue make the scalp transparent and un-check the "HPI and landmarks only" button. This will let you see where each digitized point lies with respect to the scalp. You want each point on the surface but not below it. Blue points are below the surface. Adjust points manually with the arrow buttons and using the ICP algorithm until you are happy. [[File:Mne analyze viewer.png|200px|center]]
Once you have aligned everything, click "Save Default" in the coordinate adjustment dialogue. This will save the transform in the subject's folder. Then make a "trans" folder within the subject's directory, move the transform file there, and rename it subj-trans.fif
== Visualizing data in source space ==
mne_analyze
File -> Load Surface -> Select Inflated
File-> Open
[[File:LoadInverse-mne analyze.png|center]]
To adjust sensor plots: Adjust -> Scales
To adjust source visualization: Adjust -> Estimates
smj71noot7cctqvb70phjm6ow4ioubv
68
67
2015-10-23T22:29:37Z
Jyeatman
1
wikitext
text/x-wiki
== Creating a BEM model for source localization ==
After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
== Aligning MEG sensor data to the BEM model ==
Open mne_analyze to compute the coordinate alignment. In mne_analyze load the subject's surface and digitizer data:
File -> Load Surface -> Select Inflated
File -> Load Digitizer Data -> sss_fif -> select any raw file
Next adjust how these digitized points align with the scalp in the MRI
Adjust -> Coordinate Alignment
View -> Show viewer
Within the viewer window click the "Options" button. Within the "Viewer Options" dialogue check "digitizer data" and "HPI and landmarks only". [[File:Viewer options.png|300px|center]]
Next mark the fiducial locations on the scalp. To do this click LAP, RAP and Nasion in the "Adjust coordinate alignment" dialogue and then mark each spot by clicking on the scalp surface . After marking each location click "Align using fiducials". [[File:Mne analyze AdjustCoordAlign.png|300px|center]]
From this point it is an art of getting as many of the digitizer points as possible to lie on the scalp. In the "Viewer Options" dialogue make the scalp transparent and un-check the "HPI and landmarks only" button. This will let you see where each digitized point lies with respect to the scalp. You want each point on the surface but not below it. Blue points are below the surface. Adjust points manually with the arrow buttons and using the ICP algorithm until you are happy. [[File:Mne analyze viewer.png|400px|center]]
Once you have aligned everything, click "Save Default" in the coordinate adjustment dialogue. This will save the transform in the subject's folder. Then make a "trans" folder within the subject's directory, move the transform file there, and rename it subj-trans.fif
== Visualizing data in source space ==
mne_analyze
File -> Load Surface -> Select Inflated
File-> Open
[[File:LoadInverse-mne analyze.png|300px|center]]
To adjust sensor plots: Adjust -> Scales
To adjust source visualization: Adjust -> Estimates
f4w3r9h2pdb42ifbqthctmyd4zxih5p
69
68
2015-10-23T22:30:31Z
Jyeatman
1
wikitext
text/x-wiki
== Creating a BEM model for source localization ==
After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
== Aligning MEG sensor data to the BEM model ==
Open mne_analyze to compute the coordinate alignment. In mne_analyze load the subject's surface and digitizer data:
File -> Load Surface -> Select Inflated
File -> Load Digitizer Data -> sss_fif -> select any raw file
Next adjust how these digitized points align with the scalp in the MRI
Adjust -> Coordinate Alignment
View -> Show viewer
Within the viewer window click the "Options" button. Within the "Viewer Options" dialogue check "digitizer data" and "HPI and landmarks only". [[File:Viewer options.png|400px|right]]
Next mark the fiducial locations on the scalp. To do this click LAP, RAP and Nasion in the "Adjust coordinate alignment" dialogue and then mark each spot by clicking on the scalp surface . After marking each location click "Align using fiducials". [[File:Mne analyze AdjustCoordAlign.png|300px|center]]
From this point it is an art of getting as many of the digitizer points as possible to lie on the scalp. In the "Viewer Options" dialogue make the scalp transparent and un-check the "HPI and landmarks only" button. This will let you see where each digitized point lies with respect to the scalp. You want each point on the surface but not below it. Blue points are below the surface. Adjust points manually with the arrow buttons and using the ICP algorithm until you are happy. [[File:Mne analyze viewer.png|400px|center]]
Once you have aligned everything, click "Save Default" in the coordinate adjustment dialogue. This will save the transform in the subject's folder. Then make a "trans" folder within the subject's directory, move the transform file there, and rename it subj-trans.fif
== Visualizing data in source space ==
mne_analyze
File -> Load Surface -> Select Inflated
File-> Open
[[File:LoadInverse-mne analyze.png|300px|center]]
To adjust sensor plots: Adjust -> Scales
To adjust source visualization: Adjust -> Estimates
o04rn868f5rmmchn3cm0wg59xoah425
70
69
2015-10-23T22:31:08Z
Jyeatman
1
wikitext
text/x-wiki
== Creating a BEM model for source localization ==
After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
== Aligning MEG sensor data to the BEM model ==
Open mne_analyze to compute the coordinate alignment. In mne_analyze load the subject's surface and digitizer data:
File -> Load Surface -> Select Inflated
File -> Load Digitizer Data -> sss_fif -> select any raw file
Next adjust how these digitized points align with the scalp in the MRI
Adjust -> Coordinate Alignment
View -> Show viewer
Within the viewer window click the "Options" button. Within the "Viewer Options" dialogue check "digitizer data" and "HPI and landmarks only". [[File:Viewer options.png|400px|right]]
Next mark the fiducial locations on the scalp. To do this click LAP, RAP and Nasion in the "Adjust coordinate alignment" dialogue and then mark each spot by clicking on the scalp surface . After marking each location click "Align using fiducials". [[File:Mne analyze AdjustCoordAlign.png|300px|right]]
From this point it is an art of getting as many of the digitizer points as possible to lie on the scalp. In the "Viewer Options" dialogue make the scalp transparent and un-check the "HPI and landmarks only" button. This will let you see where each digitized point lies with respect to the scalp. You want each point on the surface but not below it. Blue points are below the surface. Adjust points manually with the arrow buttons and using the ICP algorithm until you are happy. [[File:Mne analyze viewer.png|400px|right]]
Once you have aligned everything, click "Save Default" in the coordinate adjustment dialogue. This will save the transform in the subject's folder. Then make a "trans" folder within the subject's directory, move the transform file there, and rename it subj-trans.fif
== Visualizing data in source space ==
mne_analyze
File -> Load Surface -> Select Inflated
File-> Open
[[File:LoadInverse-mne analyze.png|300px|center]]
To adjust sensor plots: Adjust -> Scales
To adjust source visualization: Adjust -> Estimates
c602jope4ggp1ccfqdolxl1ncc6xr97
71
70
2015-10-23T22:32:15Z
Jyeatman
1
wikitext
text/x-wiki
== Creating a BEM model for source localization ==
After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
== Aligning MEG sensor data to the BEM model ==
Open mne_analyze to compute the coordinate alignment. In mne_analyze load the subject's surface and digitizer data:
File -> Load Surface -> Select Inflated
File -> Load Digitizer Data -> sss_fif -> select any raw file
Next adjust how these digitized points align with the scalp in the MRI
Adjust -> Coordinate Alignment
View -> Show viewer
Within the viewer window click the "Options" button. Within the "Viewer Options" dialogue check "digitizer data" and "HPI and landmarks only". [[File:Viewer options.png|400px|left]]
Next mark the fiducial locations on the scalp. To do this click LAP, RAP and Nasion in the "Adjust coordinate alignment" dialogue and then mark each spot by clicking on the scalp surface . After marking each location click "Align using fiducials". [[File:Mne analyze AdjustCoordAlign.png|300px|left]]
From this point it is an art of getting as many of the digitizer points as possible to lie on the scalp. In the "Viewer Options" dialogue make the scalp transparent and un-check the "HPI and landmarks only" button. This will let you see where each digitized point lies with respect to the scalp. You want each point on the surface but not below it. Blue points are below the surface. Adjust points manually with the arrow buttons and using the ICP algorithm until you are happy. [[File:Mne analyze viewer.png|400px|left]]
Once you have aligned everything, click "Save Default" in the coordinate adjustment dialogue. This will save the transform in the subject's folder. Then make a "trans" folder within the subject's directory, move the transform file there, and rename it subj-trans.fif
== Visualizing data in source space ==
mne_analyze
File -> Load Surface -> Select Inflated
File-> Open
[[File:LoadInverse-mne analyze.png|300px|center]]
To adjust sensor plots: Adjust -> Scales
To adjust source visualization: Adjust -> Estimates
q2jalh6pm14otxmjb7e89c0gjr2vt3c
72
71
2015-10-23T22:32:56Z
Jyeatman
1
wikitext
text/x-wiki
== Creating a BEM model for source localization ==
After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
== Aligning MEG sensor data to the BEM model ==
Open mne_analyze to compute the coordinate alignment. In mne_analyze load the subject's surface and digitizer data:
File -> Load Surface -> Select Inflated
File -> Load Digitizer Data -> sss_fif -> select any raw file
Next adjust how these digitized points align with the scalp in the MRI
Adjust -> Coordinate Alignment
View -> Show viewer
Within the viewer window click the "Options" button. Within the "Viewer Options" dialogue check "digitizer data" and "HPI and landmarks only". [[File:Viewer options.png|400px|center]]
Next mark the fiducial locations on the scalp. To do this click LAP, RAP and Nasion in the "Adjust coordinate alignment" dialogue and then mark each spot by clicking on the scalp surface . After marking each location click "Align using fiducials". [[File:Mne analyze AdjustCoordAlign.png|300px|center]]
From this point it is an art of getting as many of the digitizer points as possible to lie on the scalp. In the "Viewer Options" dialogue make the scalp transparent and un-check the "HPI and landmarks only" button. This will let you see where each digitized point lies with respect to the scalp. You want each point on the surface but not below it. Blue points are below the surface. Adjust points manually with the arrow buttons and using the ICP algorithm until you are happy. [[File:Mne analyze viewer.png|400px|center]]
Once you have aligned everything, click "Save Default" in the coordinate adjustment dialogue. This will save the transform in the subject's folder. Then make a "trans" folder within the subject's directory, move the transform file there, and rename it subj-trans.fif
== Visualizing data in source space ==
mne_analyze
File -> Load Surface -> Select Inflated
File-> Open
[[File:LoadInverse-mne analyze.png|300px|center]]
To adjust sensor plots: Adjust -> Scales
To adjust source visualization: Adjust -> Estimates
3joioqhq7awqtg78ndgi4o40oi3ah16
73
72
2015-10-23T22:33:48Z
Jyeatman
1
wikitext
text/x-wiki
__TOC__
== Creating a BEM model for source localization ==
After running freesurfer on a subject's T1 anatomy we next need to create a BEM model.
cd /home/jyeatman/git/mnefun/bin
python run_mne_bem.py --subject NLR_201_GS --layers 1 --overwrite
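For reference, the same single-layer model can also be built directly with MNE-Python. The lines below are only a sketch of that route, not the lab's run_mne_bem.py: the subjects directory path is an assumed example, and it presumes the FreeSurfer recon for the subject is finished (the watershed step also needs the FreeSurfer command-line tools on the path).
# Sketch of a single-layer BEM in MNE-Python (equivalent in spirit to --layers 1).
# subjects_dir below is an assumed example path, not necessarily our server layout.
import mne
subject = 'NLR_201_GS'
subjects_dir = '/mnt/diskArray/freesurfer'
# Generate the watershed BEM surfaces from the FreeSurfer recon (skip if they already exist)
mne.bem.make_watershed_bem(subject, subjects_dir=subjects_dir, overwrite=True)
# A single conductivity value gives a one-layer (inner skull) conductor model
model = mne.make_bem_model(subject, ico=4, conductivity=(0.3,), subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
mne.write_bem_solution(subject + '-5120-bem-sol.fif', bem)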
== Aligning MEG sensor data to the BEM model ==
Open mne_analyze to compute the coordinate alignment. In mne_analyze load the subject's surface and digitizer data:
File -> Load Surface -> Select Inflated
File -> Load Digitizer Data -> sss_fif -> select any raw file
Next adjust how these digitized points align with the scalp in the MRI
Adjust -> Coordinate Alignment
View -> Show viewer
Within the viewer window click the "Options" button. Within the "Viewer Options" dialogue check "digitizer data" and "HPI and landmarks only". [[File:Viewer options.png|400px|center]]
Next mark the fiducial locations on the scalp. To do this click LAP, RAP and Nasion in the "Adjust coordinate alignment" dialogue and then mark each spot by clicking on the scalp surface . After marking each location click "Align using fiducials". [[File:Mne analyze AdjustCoordAlign.png|300px|center]]
From this point it is an art of getting as many of the digitizer points as possible to lie on the scalp. In the "Viewer Options" dialogue make the scalp transparent and un-check the "HPI and landmarks only" button. This will let you see where each digitized point lies with respect to the scalp. You want each point on the surface but not below it. Blue points are below the surface. Adjust points manually with the arrow buttons and using the ICP algorithm until you are happy. [[File:Mne analyze viewer.png|400px|center]]
Once you have aligned everything, click "Save Default" in the coordinate adjustment dialogue. This will save the transform in the subject's folder. Then make a "trans" folder within the subject's directory, move the transform file there, and rename it subj-trans.fif
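To double-check the saved transform outside of mne_analyze, the alignment can also be plotted from MNE-Python. This is a minimal sketch, assuming example file names for the raw recording and the renamed trans file; substitute the real paths for your subject.
# Sketch: visual check of the MEG-MRI coregistration. File names are example placeholders.
import mne
subjects_dir = '/mnt/diskArray/freesurfer'                    # assumed example path
raw = mne.io.read_raw_fif('sss_fif/NLR_201_GS_raw_sss.fif')   # any raw file from the session
trans = mne.read_trans('trans/NLR_201_GS-trans.fif')          # the renamed subj-trans.fif
# Digitizer points (dig=True) should sit on the scalp surface, as in the viewer above
mne.viz.plot_alignment(raw.info, trans=trans, subject='NLR_201_GS',
                       subjects_dir=subjects_dir, surfaces=['head'],
                       dig=True, meg='sensors')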
== Visualizing data in source space ==
mne_analyze
File -> Load Surface -> Select Inflated
File-> Open
[[File:LoadInverse-mne analyze.png|300px|center]]
To adjust sensor plots: Adjust -> Scales
To adjust source visualization: Adjust -> Estimates
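The same inspection can also be scripted with MNE-Python once an inverse operator exists for the subject. A minimal sketch, with hypothetical file names for the evoked data and inverse operator:
# Sketch: apply an inverse operator and view the estimate on the inflated surface.
# The -ave.fif and -inv.fif names below are placeholders, not fixed pipeline outputs.
import mne
from mne.minimum_norm import read_inverse_operator, apply_inverse
evoked = mne.read_evokeds('NLR_201_GS-ave.fif', condition=0)
inv = read_inverse_operator('NLR_201_GS-inv.fif')
# dSPM estimate; lambda2 = 1 / SNR**2 with the conventional SNR of 3
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method='dSPM')
stc.plot(subject='NLR_201_GS', subjects_dir='/mnt/diskArray/freesurfer',  # assumed path
         hemi='both', surface='inflated')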
7zhbqeltf5zbfvpxrj46nn8vmbx2h2w
MRI Data Acquisition
0
6
31
2015-09-01T19:03:33Z
Jyeatman
1
Created page with "After MRI data has been acquired at the DISC MRI center it should be saved as PAR/REC files. For each subject, the data should be placed in: /mnt/diskArray/projects/MRI/study..."
wikitext
text/x-wiki
After MRI data has been acquired at the DISC MRI center it should be saved as PAR/REC files. For each subject, the data should be placed in:
/mnt/diskArray/projects/MRI/study_sID_initials
/mnt/diskArray/projects/MRI/NLR_001_JY
Next we run the [[Anatomy|anatomical]] and [[Diffusion|diffusion]] data through our typical pipeline.
p9eha90j4v5dby286uwuxj3n9iguwle
32
31
2015-09-01T19:04:37Z
Jyeatman
1
wikitext
text/x-wiki
After MRI data has been acquired at the DISC MRI center it should be saved as PAR/REC files. For each subject, the data should be placed in:
/mnt/diskArray/projects/MRI/study_sID_initials
/mnt/diskArray/projects/MRI/NLR_001_JY
Next we run the [[Anatomy Pipeline]] and [[Diffusion Pipeline]] to preprocess this data that will be used for many purposes.
3ejlaomoydv2iescelrrn97y72aitlo
34
32
2015-09-01T19:07:03Z
Jyeatman
1
wikitext
text/x-wiki
After MRI data has been acquired at the DISC MRI center it should be saved as PAR/REC files. For each subject, the data should be placed in:
/mnt/diskArray/projects/MRI/study_sID_initials_session#
/mnt/diskArray/projects/MRI/NLR_001_JY_1
Next we run the [[Anatomy Pipeline]] and [[Diffusion Pipeline]] to preprocess this data that will be used for many purposes.
lpkc3n6yr48bsgbffiajudpar8d2o0j
35
34
2015-09-01T19:08:31Z
Jyeatman
1
wikitext
text/x-wiki
After MRI data has been acquired at the DISC MRI center it should be saved as PAR/REC files. For each subject, the data should be placed in:
/mnt/diskArray/projects/MRI/study_sID_initials/YearMonthDay/
/mnt/diskArray/projects/MRI/NLR_001_JY/20150926/
Next we run the [[Anatomy Pipeline]] and [[Diffusion Pipeline]] to preprocess this data that will be used for many purposes.
e4rksxat60ku07em3hwil3dms9mzoqf
83
35
2015-10-28T21:18:38Z
Jyeatman
1
wikitext
text/x-wiki
After MRI data has been acquired at the DISC MRI center it should be saved as PAR/REC files. For each subject, the data should be placed in:
/mnt/diskArray/projects/MRI/study_sID_initials/YearMonthDay/raw
For example:
/mnt/diskArray/projects/MRI/NLR_001_JY/20150926/raw
Next we run the [[Anatomy Pipeline]] and [[Diffusion Pipeline]] to preprocess this data that will be used for many purposes.
67f5vuibyfwiqh0fsikiba89ovflhnq
84
83
2015-10-28T21:21:46Z
Jyeatman
1
wikitext
text/x-wiki
After MRI data has been acquired at the DISC MRI center it should be saved as PAR/REC files. For each subject, the data should be placed in:
/mnt/diskArray/projects/MRI/study_sID_initials/YearMonthDay/raw
For example:
/mnt/diskArray/projects/MRI/NLR_001_JY/20150926/raw
When a subject has multiple scan sessions, they will go in separate folders within the subject's main folder, each with the date of the scan, so that we can determine the order of the scans. All the raw data should be within the raw folder so that the outputs of different processing stages can be saved in separate named folders. For example:
/mnt/diskArray/projects/MRI/NLR_001_JY/20150926/dti96trilinrt
/mnt/diskArray/projects/MRI/NLR_001_JY/20150926/mrtrix
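The session skeleton can be created with a few lines of Python; this is just a convenience sketch using the example subject and date from above, not a required lab script.
# Sketch: create <base>/<subject>/<date>/raw for a new scan session.
from pathlib import Path
base = Path('/mnt/diskArray/projects/MRI')
subject, session = 'NLR_001_JY', '20150926'   # example values from above
raw_dir = base / subject / session / 'raw'
raw_dir.mkdir(parents=True, exist_ok=True)    # safe to re-run; creates parent folders as needed
print('Put the PAR/REC files in', raw_dir)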
Next we run the [[Anatomy Pipeline]] and [[Diffusion Pipeline]] to preprocess these data, which will be used for many purposes.
feegryv3m9tjgfs327oknoi4u8as3e4
NFS
0
52
314
2016-07-27T23:02:51Z
Dstrodtman
5
Created page with "Mounting all of our scratch space using NFS will allow us to access each other's files as if they are on the local machine, reducing redundancy and maximizing our storage. Als..."
wikitext
text/x-wiki
Mounting all of our scratch space using NFS will allow us to access each other's files as if they are on the local machine, reducing redundancy and maximizing our storage. Also, this will make things much easier once the cluster is up and running.
==Install NFS Server==
To check if you've already installed the NFS server
dpkg -l | grep nfs-kernel-server
If not
sudo apt-get install nfs-kernel-server
==Make and Export Local Drives==
Our naming convention is to use the name of your computer, followed by a number if you have more than one spinny drive.
sudo mkdir -p /export/<computername>
Add the following to the /etc/fstab file so the drive is bind-mounted under /export at boot:
/mnt/scratch /export/<computername> none bind 0 0
==This page is not yet complete==
cygh8pl3p8sk4odl87kc58yzai0bx75
Processing Data
0
45
269
2016-03-24T22:45:48Z
Dstrodtman
5
Dstrodtman moved page [[Processing Data]] to [[HCP Process]]
wikitext
text/x-wiki
#REDIRECT [[HCP Process]]
bu66zqt101o9qvfksb42nqoekb19pod
Psychophysics
0
18
135
2015-11-17T16:39:26Z
Jyeatman
1
Created page with "__TOC__ ==Attention: Spatial Cueing=="
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
2sms102qyr6be0mopqnggj3yuqafy9n
136
135
2015-11-17T19:41:29Z
Pdonnelly
2
/* Attention: Spatial Cueing */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Preparation:
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
===Procedure===
n60x915x61rwnvf26pvrw0ntfxibyie
139
136
2015-11-18T22:53:06Z
Pdonnelly
2
/* Procedure */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Preparation:
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
===Procedure===
1. Load MatLab. Navigate to the Code directory
cd /.../code
2. Explain the task. Below is an example script.
We are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg]]
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg]]
auf0pm4jsk53jy6lvjbb4o2hgdizaje
144
139
2015-11-18T23:36:52Z
Pdonnelly
2
/* Procedure */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Preparation:
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
===Procedure===
1. Load MatLab. Navigate to the Code directory
cd /.../code
2. Introduce the task
3. Practice Uncued task
4. Introduce the Cued
5. Practice the Cued
6. Introduce the Single Stimulus
7. Practice the Single Stimulus
8. Additional Practice
9. Run the Series
====Introduction====
We are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is off kilter.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
====Uncued Practice====
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
====Introduce the Cued task====
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg]]
Or this:
[[File:CueGaborsBigTilt2.jpg]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
====Cued Practice====
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
====Introduce the Single Stimulus====
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
====Practice Single Stimulus====
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
====Additional Practice====
Adding practice rounds beyond the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few iterations of the practice rounds, at short duration.
g4chwfur5wvxmads2lrb4cul0o4cynd
145
144
2015-11-18T23:37:50Z
Pdonnelly
2
/* Procedure */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Preparation:
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
===Procedure===
Load MatLab. Navigate to the Code directory
cd /.../code
====Introduction====
We are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is off kilter.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
====Uncued Practice====
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
====Introduce the Cued task====
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg]]
Or this:
[[File:CueGaborsBigTilt2.jpg]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
====Cued Practice====
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
====Introduce the Single Stimulus====
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
====Practice Single Stimulus====
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
====Additional Practice====
Adding practice rounds beyond the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few iterations of the practice rounds, at short duration.
lgcll6wp2sq18swth9r0pfmjqq1ex8i
146
145
2015-11-18T23:45:12Z
Pdonnelly
2
/* Attention: Spatial Cueing */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Preparation:
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
===Procedure===
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
====Introduction====
We are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is off kilter.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
====Uncued Practice====
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
====Introduce the Cued task====
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg]]
Or this:
[[File:CueGaborsBigTilt2.jpg]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
====Cued Practice====
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
====Introduce the Single Stimulus====
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long, though.
====Practice Single Stimulus====
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
====Additional Practice====
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
At minimum, however, the subject should perform the uncued task with an initial tilt level of 15 or below.
Run a few short-duration iterations of the practice rounds as needed.
====Run the Series====
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
g3fftq6xzn5kg95ia4dwc9xqtjx6bla
147
146
2015-11-18T23:46:25Z
Pdonnelly
2
/* Attention: Spatial Cueing */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Preparation:
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is off kilter.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg]]
Or this:
[[File:CueGaborsBigTilt2.jpg]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long, though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
At minimum, however, the subject should perform the uncued task with an initial tilt level of 15 or below.
Run a few short-duration iterations of the practice rounds as needed.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
6fft2evp7kpsr91ytjr5zhyjga4zm2g
148
147
2015-11-18T23:47:00Z
Pdonnelly
2
/* Attention: Spatial Cueing */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is off kilter.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg]]
Or this:
[[File:CueGaborsBigTilt2.jpg]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long, though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
At minimum, however, the subject should perform the uncued task with an initial tilt level of 15 or below.
Run a few short-duration iterations of the practice rounds as needed.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
lyfsddaf4yofyrnocxffl4i27eknctw
150
148
2015-11-18T23:50:16Z
Pdonnelly
2
/* Introduction */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is off kilter.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg | 200px | thumb]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg]]
Or this:
[[File:CueGaborsBigTilt2.jpg]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long, though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
At minimum, however, the subject should perform the uncued task with an initial tilt level of 15 or below.
Run a few short-duration iterations of the practice rounds as needed.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
ddgv15fknnfa04gt18jo777s1pxhnu4
151
150
2015-11-18T23:50:53Z
Pdonnelly
2
/* Introduction */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is off kilter.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg]]
Or this:
[[File:CueGaborsBigTilt2.jpg]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long, though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
At minimum, however, the subject should perform the uncued task with an initial tilt level of 15 or below.
Run a few short-duration iterations of the practice rounds as needed.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
q06uzxekisy8ndwqa149uc2t0d5gax7
152
151
2015-11-18T23:51:51Z
Pdonnelly
2
/* Introduction */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is off kilter.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg]]
If you get it wrong, you won't hear anything, but you will see a red + in the middle of the screen.
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg]]
Or this:
[[File:CueGaborsBigTilt2.jpg]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long, though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
At minimum, however, the subject should perform the uncued task with an initial tilt level of 15 or below.
Run a few short-duration iterations of the practice rounds as needed.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
4wc7ine24w0jjywp6sv9zxtnqxxay2g
153
152
2015-11-18T23:54:18Z
Pdonnelly
2
/* Introduction */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is off kilter.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg]]
If you get it wrong, you won't hear anything, but you will see a red + in the middle of the screen.
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg]]
Or this:
[[File:CueGaborsBigTilt2.jpg]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long, though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
At minimum, however, the subject should perform the uncued task with an initial tilt level of 15 or below.
Run a few short-duration iterations of the practice rounds as needed.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
468v16fbmx4statbm443cahr33iiw2w
154
153
2015-11-19T00:30:55Z
Jyeatman
1
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|300px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is tilted to one side.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg]]
If you get it wrong, you won't hear anything, but you will see a red + in the middle of the screen.
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg]]
Or this:
[[File:CueGaborsBigTilt2.jpg]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long, though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
At minimum, however, the subject should perform the uncued task with an initial tilt level of 15 or below.
Run a few short-duration iterations of the practice rounds as needed.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
jorzgy8v8lxzaxpx566u6gsmo8qbmhy
155
154
2015-11-19T00:31:18Z
Jyeatman
1
/* Introduction */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is tilted to one side.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg]]
If you get it wrong, you won't hear anything, but you will see a red + in the middle of the screen.
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg]]
Or this:
[[File:CueGaborsBigTilt2.jpg]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long, though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
At minimum, however, the subject should perform the uncued task with an initial tilt level of 15 or below.
Run a few short-duration iterations of the practice rounds as needed.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
naxpsgubdd40mb7jl4dmzfbte2dmcjp
156
155
2015-11-19T00:42:43Z
Pdonnelly
2
/* Attention: Spatial Cueing */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is tilted to one side.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red + in the middle of the screen.
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg|800px]]
Or this:
[[File:CueGaborsBigTilt2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long, though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
At minimum, however, the subject should perform the uncued task with an initial tilt level of 15 or below.
Run a few short-duration iterations of the practice rounds as needed.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
b4uctip5zuuqw2sd3wr1e44b6s0lzjz
157
156
2015-11-19T00:43:23Z
Pdonnelly
2
/* Attention: Spatial Cueing */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is tilted to one side.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red + in the middle of the screen.
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg|800px]]
Or this:
[[File:CueGaborsBigTilt2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long, though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow the embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [e.g., 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
At minimum, however, the subject should perform the uncued task with an initial tilt level of 15 or below.
Run a few short-duration iterations of the practice rounds as needed.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
7xcp25yb061n4a5xnoehvhbr0bg0tqm
158
157
2015-11-19T17:41:07Z
Pdonnelly
2
/* Introduction */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
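As a minimal sketch of this setup in MATLAB (the /.../code path is abbreviated here just as it is above, and opening the script is simply one convenient way to check the monitor setting):
 % Setup sketch - substitute the lab's real code directory for /.../code
 cd('/.../code');      % move into the code directory
 which CueingDL1       % confirm the script is found
 edit CueingDL1.m      % open the script and verify the monitor is set to "CHDD"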
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is tilted to one side.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red + in the middle of the screen.
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg|800px]]
Or this:
[[File:CueGaborsBigTilt2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few short iterations of the practice rounds.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on the subject's performance during practice - somewhere in the range of 15-20 should be ideal.
2lwzruym7szbm9clcj0f1z5aq5ui7w7
160
158
2015-11-19T17:48:30Z
Pdonnelly
2
/* Attention: Spatial Cueing */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|800px]]
Here's a close-up so you can see that they are intentionally blurry. Do you notice anything about the blurred circles?
[[File:GaborsCloseUp.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is tilted to one side.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red + in the middle of the screen.
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabors3.jpg|800px]]
Or this:
[[File:CueGaborsBigTilt2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few short iterations of the practice rounds.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on the subject's performance during practice - somewhere in the range of 15-20 should be ideal.
huvewvs4g3zr58gsnz1qova4m5rvip1
167
160
2015-11-19T18:49:56Z
Pdonnelly
2
/* Introduce the Cued task */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|800px]]
Here's a close-up so you can see that they are intentionally blurry. Do you notice anything about the blurred circles?
[[File:GaborsCloseUp.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is tilted to one side.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red + in the middle of the screen.
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CueGabor.jpg|800px]]
Or this:
[[File:CueGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few short iterations of the practice rounds.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on the subject's performance during practice - somewhere in the range of 15-20 should be ideal.
js3fyuayp2gg02taowjc6o7sanfs9ut
168
167
2015-11-19T18:50:48Z
Pdonnelly
2
/* Introduce the Cued task */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|800px]]
Here's a close-up so you can see that they are intentionally blurry. Do you notice anything about the blurred circles?
[[File:GaborsCloseUp.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is tilted to one side.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red + in the middle of the screen.
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few short iterations of the practice rounds.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on the subject's performance during practice - somewhere in the range of 15-20 should be ideal.
86qcr5kllboqe6qi78h4vea0sllsj7o
170
168
2015-11-19T18:54:38Z
Pdonnelly
2
/* Introduction */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|800px]]
Here's a close-up so you can see that they are intentionally blurry. Do you notice anything about the blurred circles?
[[File:GaborsCloseUp.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is tilted to one side.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red + in the middle of the screen.
[[File:RedCross.jpg|800px]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few short iterations of the practice rounds.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on the subject's performance during practice - somewhere in the range of 15-20 should be ideal.
hl0thyzwl50lq9iccs1w6rxdzuvj3id
171
170
2015-11-19T18:58:35Z
Pdonnelly
2
/* Introduction */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|800px]]
Here's a close-up so you can see that they are intentionally blurry. Do you notice anything about the blurred circles?
[[File:GaborsCloseUp.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the blurred circle that is tilted to one side.
As you can see all eight of the circles are the same except for one. For each task in the game you will need to find the one that is shifted like this* or like this*
*During demonstration, use your hand to show the tilt of the blurred lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the one that is different, you will press either the Left Arrow key or the Right arrow key, depending on whether the tops of the blurred lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are shifted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the blurred circle is on, so even though the different circle is on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red cross in the middle of the screen.
[[File:RedCross.jpg|800px]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which circle is not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the blurry circle that will be different. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are pointing - the only difference is that you won't have to figure out which one it is that is different.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one of the blurry circles on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurred lines are pointing.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few short iterations of the practice rounds.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on the subject's performance during practice - somewhere in the range of 15-20 should be ideal.
bn1mwd8rqzvx6c4mne4mf7u38i9eh0n
172
171
2015-11-19T21:07:29Z
Pdonnelly
2
/* Attention: Spatial Cueing */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|800px]]
Here's a close-up so you can see that they are intentionally blurry. Do you notice anything about the lines?
[[File:GaborsCloseUp.jpg]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the lines that are tilted to the side.
As you can see all eight of the lines are the same except for one. For each task in the game you will need to find the one that is tilted like this* or like this*
*During demonstration, use your hand to show the tilt of the lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the tilted lines, you will press either the Left Arrow key or the Right Arrow key, depending on whether the tops of the lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are tilted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted lines are on, so even if the lines are on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red cross in the middle of the screen.
[[File:RedCross.jpg|800px]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which lines are not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the lines that will be tilted. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are tilted - the only difference is that you won't have to figure out which one it is.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one set of lines on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurry lines are tilted.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few short iterations of the practice rounds.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on the subject's performance during practice - somewhere in the range of 15-20 should be ideal.
kivfqbnky7xm9un6ytb8g8qjtfvdgli
173
172
2015-11-19T21:09:40Z
Pdonnelly
2
/* Introduction */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|800px]]
Here's a close-up so you can see that they are intentionally blurry. Do you notice anything about the lines?
[[File:GaborsCloseUp.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the lines that are tilted to the side.
As you can see all eight of the lines are the same except for one. For each task in the game you will need to find the one that is tilted like this* or like this*
*During demonstration, use your hand to show the tilt of the lines, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the tilted lines, you will press either the Left Arrow key or the Right Arrow key, depending on whether the tops of the lines are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the lines are tilted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted lines are on, so even if the lines are on the right side of the screen, you would still press the Left Arrow since the tops of the blurred lines are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the lines are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red cross in the middle of the screen.
[[File:RedCross.jpg|800px]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which lines are not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the lines that will be tilted. Everything else is the same - you will still press either the Left or Right arrow based on which way the lines are tilted - the only difference is that you won't have to figure out which one it is.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one set of lines on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurry lines are tilted.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few short iterations of the practice rounds.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on the subject's performance during practice - somewhere in the range of 15-20 should be ideal.
7qzcnyjfozvgqm27nz99n745yo6ob6f
174
173
2015-11-19T21:14:57Z
Pdonnelly
2
/* Attention: Spatial Cueing */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|800px]]
Here's a close-up so you can see that they are intentionally blurry. Do you notice anything about the stripes?
[[File:GaborsCloseUp.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the stripes that are tilted to the side.
As you can see all eight of the stripes are the same except for one. For each task in the game you will need to find the ones that are tilted like this* or like this*
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the tilted stripes, you will press either the Left Arrow key or the Right Arrow key, depending on whether the tops of the stripes are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the stripes are tilted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red cross in the middle of the screen.
[[File:RedCross.jpg|800px]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which lines are not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the stripes that will be tilted. Everything else is the same - you will still press either the Left or Right arrow based on which way the stripes are tilted - the only difference is that you won't have to figure out which ones they are.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one set of stripes on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurry stripes are tilted.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few short iterations of the practice rounds.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on the subject's performance during practice - somewhere in the range of 15-20 should be ideal.
hgb96throq5xc18yx132a5gtu2t1n51
175
174
2015-11-19T21:15:29Z
Pdonnelly
2
/* Introduction */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|800px]]
Here's a close-up so you can see that they are intentionally blurry. Do you notice anything about the stripes?
[[File:GaborsCloseUp.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the stripes that are tilted to the side.
As you can see all eight of the stripes are the same except for one. For each task in the game you will need to find the ones that are tilted like this* or like this*
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the tilted stripes, you will press either the Left Arrow key or the Right Arrow key, depending on whether the tops of the stripes are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the stripes are tilted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red cross in the middle of the screen.
[[File:RedCross.jpg|800px]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which lines are not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the stripes that will be tilted. Everything else is the same - you will still press either the Left or Right arrow based on which way the stripes are tilted - the only difference is that you won't have to figure out which ones they are.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one set of stripes on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurry stripes are tilted.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few short iterations of the practice rounds.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on the subject's performance during practice - somewhere in the range of 15-20 should be ideal.
tvp84yg39w38gxbxd0gb3hdgu293kv4
176
175
2015-11-20T18:28:59Z
Pdonnelly
2
/* Attention: Spatial Cueing */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Load MatLab. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game. In this game you are going to see a bunch of blurred circles with two stripes down the middle. This is what they look like:
[[File:Gabors1.jpg|800px]]
Here's a close-up so you can see that they are intentionally blurry. Do you notice anything about the stripes?
[[File:GaborsCloseUp.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the stripes that are tilted to the side.
As you can see all eight of the stripes are the same except for one. For each task in the game you will need to find the ones that are tilted like this* or like this*
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the tilted stripes, you will press either the Left Arrow key or the Right Arrow key, depending on whether the tops of the stripes are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the stripes are tilted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red cross in the middle of the screen.
[[File:RedCross.jpg|800px]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which stripes are not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the stripes that will be tilted. Everything else is the same - you will still press either the Left or Right arrow based on which way the stripes are tilted - the only difference is that you won't have to figure out which ones they are.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one set of stripes on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurry stripes are tilted.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding further practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few short iterations of the practice rounds.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on the subject's performance during practice - somewhere in the range of 15-20 should be ideal.
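As a hedged sketch of the real (non-practice) run, assuming the same prompt flow as the practice rounds (CueingDL1.m itself is not reproduced on this page, so the prompt labels are assumptions), the answers would look roughly like this:
 % Hypothetical sketch of the real run -- prompt labels are assumed; the answers
 % follow the instructions above. Set the number of blocks in the script first.
 CueingDL1                          % launch the task
 %   Practice?             -> n     (NOT practice)
 %   Threshold/tilt (deg)? -> 18    (example; choose 15-20 based on practice performance)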
p753lsmtiv2o7woay25lnf2obyxg9j0
178
176
2015-11-30T22:22:19Z
Pdonnelly
2
/* Attention: Spatial Cueing */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Open Terminal. Type "sudo ptb3-matlab", then enter the password.
MatLab will load. Navigate to the Code directory
cd /.../code
Ensure the script shows the correct monitor "CHDD"
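For quick reference, here is a minimal sketch of these setup steps from the MATLAB prompt. The code directory path is truncated in this wiki (shown as /.../code), and exactly where the monitor name is set inside CueingDL1.m is not documented here, so treat the snippet as illustrative rather than exact.
 % Illustrative setup sketch only -- the real code path is site-specific and is
 % truncated as /.../code in this wiki, so do not copy the path literally.
 cd('/.../code')      % move into the lab's code directory
 which CueingDL1      % confirm the task script is found on the path
 edit CueingDL1       % open it and verify the monitor is set to "CHDD"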
===Introduction===
In this task you are going to play a game. In this game you are going to see patches of stripes arranged in a circle. This is what they look like:
[[File:Gabors1.jpg|800px]]
Here's a close-up so you can see that they are intentionally blurry. Do you notice anything about the stripes?
[[File:GaborsCloseUp.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the stripes that are tilted to the side.
As you can see all eight of the stripes are the same except for one. For each task in the game you will need to find the ones that are tilted like this* or like this*
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the tilted stripes, you will press either the Left Arrow key or the Right Arrow key, depending on whether the tops of the stripes are pointing toward the Left or Right corner of the monitor.
In the last image, you can see that the stripes are tilted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red cross in the middle of the screen.
[[File:RedCross.jpg|800px]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which stripes are not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the stripes that will be tilted. Everything else is the same - you will still press either the Left or Right arrow based on which way the stripes are tilted - the only difference is that you won't have to figure out which ones those are.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one set of stripes on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurry stripes are tilted.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding additional practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at a minimum, the subject should perform the uncued task with an initial tilt level of 15 degrees or below.
Run a few iterations of the practice rounds at a short duration.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
kivc852bps87iv2rnb0bxivgety2vfi
180
178
2015-12-01T23:17:51Z
Alexwhite
3
/* Introduction */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Open Terminal. Type "sudo ptb3-matlab", then enter the password.
MATLAB will load. Navigate to the code directory:
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game, and your goal is to win as many points as possible. In this game you are going to see patches of stripes arranged in a circle. This is what they look like:
[[File:Gabors1.jpg|800px]]
Here's a close-up so you can see that they are intentionally blurry. Do you notice anything about the stripes?
[[File:GaborsCloseUp.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the stripes that are tilted to the side.
As you can see all eight of the stripes are the same except for one. For each task in the game you will need to find the ones that are tilted like this* or like this*
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the tilted stripes, you will press either the Left Arrow key or the Right Arrow key, depending on whether the tops of the stripes are pointing toward the Left or Right corner of the monitor. If you are correct, you win three points!
In the last image, you can see that the stripes are tilted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
If you get it right, you will hear a short ding and see a green star in the middle of the screen over the +. This is what it looks like:
[[File:FeedbackStar.jpg|800px]]
If you get it wrong, you won't hear anything, but you will see a red cross in the middle of the screen.
[[File:RedCross.jpg|800px]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which stripes are not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the stripes that will be tilted. Everything else is the same - you will still press either the Left or Right arrow based on which way the stripes are tilted - the only difference is that you won't have to figure out which ones those are.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one set of stripes on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurry stripes are tilted.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding additional practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at a minimum, the subject should perform the uncued task with an initial tilt level of 15 degrees or below.
Run a few iterations of the practice rounds at a short duration.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
hay4p2v1wk33zsc0m81s7w4ul1ejzsj
186
180
2015-12-02T22:05:28Z
Pdonnelly
2
/* Introduction */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Open Terminal. Type "sudo ptb3-matlab", then enter the password.
MATLAB will load. Navigate to the code directory:
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game, and your goal is to win as many points as possible. In this game you are going to see patches of stripes arranged in a circle. This is what they look like:
[[File:Gabors1.jpg|800px]]
Here's a close-up so you can see that they are intentionally blurry. Do you notice anything about the stripes?
[[File:GaborsCloseUp.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the stripes that are tilted to the side.
As you can see all eight of the stripes are the same except for one. For each task in the game you will need to find the ones that are tilted like this* or like this*
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the tilted stripes, you will press either the Left Arrow key or the Right Arrow key, depending on whether the tops of the stripes are pointing toward the Left or Right corner of the monitor. If you are correct, you win three points!
In the last image, you can see that the stripes are tilted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
Because this is a game, you'll get points for correct responses and your goal is to see how many points you can get. If you get over 700 points you'll get a prize to take home!
If you get it right, you will hear a short ding and see a +3 in green at the center of the screen. This is what it looks like:
[[File:feedbackPointscorrect.jpg|800px]]
If you get it wrong, you will hear a different sound and see a +0 in the middle of the screen in red.
[[File:feedbackPointsError.jpg|800px]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which stripes are not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the stripes that will be tilted. Everything else is the same - you will still press either the Left or Right arrow based on which way the stripes are tilted - the only difference is that you won't have to figure out which ones those are.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one set of stripes on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurry stripes are tilted.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding additional practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at a minimum, the subject should perform the uncued task with an initial tilt level of 15 degrees or below.
Run a few iterations of the practice rounds at a short duration.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
24oobshpo1e1f9gieent1yvngwcwc6g
187
186
2015-12-02T22:09:15Z
Pdonnelly
2
/* Attention: Spatial Cueing */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Open Terminal. Type "sudo ptb3-matlab", then enter the password.
MATLAB will load. Navigate to the code directory:
cd /.../code
Ensure the script shows the correct monitor "CHDD"
===Introduction===
In this task you are going to play a game, and your goal is to win as many points as possible. In this game you are going to see patches of stripes arranged in a circle. This is what they look like:
[[File:Gabors1.jpg|800px]]
Here's a close-up so you can see that they are intentionally blurry. Do you notice anything about the stripes?
[[File:GaborsCloseUp.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen. This is important because the game is to find the stripes that are tilted to the side.
As you can see all eight of the stripes are the same except for one. For each task in the game you will need to find the ones that are tilted like this* or like this*
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
After finding the tilted stripes, you will press either the Left Arrow key or the Right Arrow key, depending on whether the tops of the stripes are pointing toward the Left or Right corner of the monitor. If you are correct, you win three points!
In the last image, you can see that the stripes are tilted so that they "point" at the upper right corner of the monitor. In this case, you would press the Right Arrow key.
Here's one where you would press the Left Arrow key:
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
Because this is a game, you'll get points for correct responses and your goal is to see how many points you can get. If you get over 700 points you'll get a prize to take home!
If you get it right, you will hear a short ding and see a +3 in green at the center of the screen. This is what it looks like:
[[File:FeedbackPointsCorrect.jpg|800px]]
If you get it wrong, you will hear a different sound and see a +0 in the middle of the screen in red.
[[File:FeedbackPointsError.jpg|800px]]
Let's try a practice round!
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which stripes are not like the others!
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to give you a bit of help by showing a red dot next to the stripes that will be tilted. Everything else is the same - you will still press either the Left or Right arrow based on which way the stripes are tilted - the only difference is that you won't have to figure out which ones those are.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the cued version [1], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Single Stimulus===
The final task is even easier. Now there will only be one set of stripes on the screen, so you won't be distracted by having all 8. Same rules apply for this one - all you need to do is figure out which way those blurry stripes are tilted.
Here's an example:
[[File:SingleStim3.jpg|800px]]
I think you've gotten the hang of it, but let's practice this one too. This time we won't go for as long though.
===Practice Single Stimulus===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding additional practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at a minimum, the subject should perform the uncued task with an initial tilt level of 15 degrees or below.
Run a few iterations of the practice rounds at a short duration.
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
95otnqlp9ujjtihzsrzbbwgticvtn1r
205
187
2015-12-10T23:39:55Z
Pdonnelly
2
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Open Terminal. Type "sudo ptb3-matlab", then enter the password.
MATLAB will load. Navigate to the code directory:
cd /.../code
Ensure the script shows the correct monitor "CHDD" and the subject number is correct.
===Introduction===
In this task you are going to play a game, and your goal is to win as many points as possible. This game is a vision game and will test your powers of attention and detection.
===Introduce the Single Stimulus===
The first part of the game looks like this:
[[File:SingleStim3.jpg|800px]]
The stripe patch is going to flash on the screen in one of eight spots around the screen and your job is to say which way the stripes are tilted by pressing the correct button on the keyboard. If they are tilted like this* towards this* corner of the screen, press the Left arrow.
If they are tilted like this* towards the other corner of the screen, press the Right arrow instead.
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen.
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
It doesn't matter where the patch of stripes is located on the screen, it only matters which way the stripes are tilted.
If you get it right, you will hear a short ding and see a +3 in green at the center of the screen. This is what it looks like:
[[File:FeedbackPointsCorrect.jpg|800px]]
If you get it wrong, you will hear a different sound and see a +0 in the middle of the screen in red.
[[File:FeedbackPointsError.jpg|800px]]
Any questions? Ready to try a practice round of this part?
===Practice Single Stimulus===
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to make it a bit harder. This time there will be 8 of those stripe patches, arranged in a circle and all of them will be the same except one. In this game, we help you out and show a red dot next to the one that will be tilted.
Your job will be to find that red dot, see the stripes that are tilted, and press the correct button. It's important to note that in this game the red dot will ALWAYS be next to the stripes that are tilted and will appear just a split-second before the stripe patches.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Continue embedded instructions, specifying that it is another practice [y], the cued version [1], the long/short version, and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Uncued task===
In the last part, we make it a bit harder and get rid of that red dot. In this task, your job will be to pay close attention to all of the stripes, find which one is tilted, and press the correct button.
In this one, it will be especially important to keep your eyes fixed on that +, so that you can see all of the patches, since you don't know which one is going to be different.
This is what they look like:
[[File:GaborsCloseUp.jpg|800px]]
As you can see all eight of the stripes are the same except for one.
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding additional practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at a minimum, the subject should perform the uncued task with an initial tilt level of 15 degrees or below.
Run a few iterations of the practice rounds at a short duration.
===Block Series===
Now that you're done with the practice and have proven yourself an expert, let's put those skills to the test. In the real game, you will play the different tasks in this order: single patch, uncued, cued, uncued, cued, single patch. Each game is going to have more of each task and we're going to add a feature to make it a bit harder.
This game is designed so that every time you get the right answer, the next one is going to be a bit harder by making the tilt harder and harder to see*.
*demonstrate with your hand that the tilt angle will get progressively shallower.
The game works the opposite way too - if you get one wrong, the next one is going to be a bit easier.
You might have noticed that when you were doing the practice every time you got a correct answer you saw a +3 in the middle of the screen, and every time you got it wrong you got a +0. In the practice the points didn't matter, but now that we're starting the real game, the challenge is to see how many points you can get!
To win the game, you need to get more than 700 points!
<br>
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which stripes are not like the others!
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
316qz4h3baczst7qcnrdakdyx4gml50
207
205
2015-12-10T23:44:48Z
Pdonnelly
2
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Open Terminal. Type "sudo ptb3-matlab", then enter the password.
MATLAB will load. Navigate to the code directory:
cd /.../code
Ensure the script shows the correct monitor "CHDD" and the subject number is correct.
===Introduction===
In this task you are going to play a game, and your goal is to win as many points as possible. This game is a vision game and will test your powers of attention and detection.
===Introduce the Single Stimulus===
The first part of the game looks like this:
[[File:SingleStim3.jpg|800px]]
The stripe patch is going to flash on the screen in one of eight spots around the screen and your job is to say which way the stripes are tilted by pressing the correct button on the keyboard. If they are tilted like this* towards this* corner of the screen, press the Left arrow.
If they are tilted like this* towards the other corner of the screen, press the Right arrow instead.
[[File:singleSmallTilt2.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen.
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
It doesn't matter where the patch of stripes is located on the screen, it only matters which way the stripes are tilted.
If you get it right, you will hear a short ding and see a +3 in green at the center of the screen. This is what it looks like:
[[File:FeedbackPointsCorrect.jpg|800px]]
If you get it wrong, you will hear a different sound and see a +0 in the middle of the screen in red.
[[File:FeedbackPointsError.jpg|800px]]
Any questions? Ready to try a practice round of this part?
===Practice Single Stimulus===
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to make it a bit harder. This time there will be 8 of those stripe patches, arranged in a circle and all of them will be the same except one. In this game, we help you out and show a red dot next to the one that will be tilted.
Your job will be to find that red dot, see the stripes that are tilted, and press the correct button. It's important to note that in this game the red dot will ALWAYS be next to the stripes that are tilted and will appear just a split-second before the stripe patches.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Continue embedded instructions, specifying that it is another practice [y], the cued version [1], the long/short version, and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Uncued task===
In the last part, we make it a bit harder and get rid of that red dot. In this task, your job will be to pay close attention to all of the stripes, find which one is tilted, and press the correct button.
In this one, it will be especially important to keep your eyes fixed on that +, so that you can see all of the patches, since you don't know which one is going to be different.
This is what they look like:
[[File:GaborsCloseUp.jpg|800px]]
As you can see all eight of the stripes are the same except for one.
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding additional practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at a minimum, the subject should perform the uncued task with an initial tilt level of 15 degrees or below.
Run a few iterations of the practice rounds at a short duration.
===Block Series===
Now that you're done with the practice and have proven yourself an expert, let's put those skills to the test. In the real game, you will play the different tasks in this order: single patch, uncued, cued, uncued, cued, single patch. Each game is going to have more of each task and we're going to add a feature to make it a bit harder.
This game is designed so that every time you get the right answer, the next one is going to be a bit harder by making the tilt harder and harder to see*.
*demonstrate with your hand that the tilt angle will get progressively shallower.
The game works the opposite way too - if you get one wrong, the next one is going to be a bit easier.
You might have noticed that when you were doing the practice every time you got a correct answer you saw a +3 in the middle of the screen, and every time you got it wrong you got a +0. In the practice the points didn't matter, but now that we're starting the real game, the challenge is to see how many points you can get!
To win the game, you need to get more than 700 points!
<br>
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which stripes are not like the others!
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
b1ebsjhqsy6bqgxj2is057902n8spqr
215
207
2016-01-08T20:42:47Z
Pdonnelly
2
/* Attention: Spatial Cueing */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Open Terminal. Type "sudo ptb3-matlab", then enter the password.
MATLAB will load. Navigate to the code directory:
cd /.../code
Ensure the script shows the correct monitor "CHDD" and the subject number is correct.
===Introduction===
In this task you are going to play a game, and your goal is to win as many points as possible. This game is a vision game and will test your powers of attention and detection.
===Introduce the Single Stimulus===
The first part of the game looks like this:
[[File:SingleStim3.jpg|800px]]
The stripe patch is going to flash on the screen in one of eight spots around the screen and your job is to say which way the stripes are tilted by pressing the correct button on the keyboard. If they are tilted like this* towards this* corner of the screen, press the Left arrow.
If they are tilted like this* towards the other corner of the screen, press the Right arrow instead.
[[File:singleSmallTilt2.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen.
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
It doesn't matter where the patch of stripes is located on the screen, it only matters which way the stripes are tilted.
If you get it right, you will hear a short ding and see a +3 in green at the center of the screen. This is what it looks like:
[[File:FeedbackPointsCorrect.jpg|800px]]
If you get it wrong, you will hear a different sound and see a +0 in the middle of the screen in red.
[[File:FeedbackPointsError.jpg|800px]]
Any questions? Ready to try a practice round of this part?
===Practice Single Stimulus===
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Uncued task===
In the last part, we make it a bit harder and get rid of that red dot. In this task, your job will be to pay close attention to all of the stripes, find which one is tilted, and press the correct button.
In this one, it will be especially important to keep your eyes fixed on that +, so that you can see all of the patches, since you don't know which one is going to be different.
This is what they look like:
[[File:GaborsCloseUp.jpg|800px]]
As you can see all eight of the stripes are the same except for one.
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to make it a bit harder. This time there will be 8 of those stripe patches, arranged in a circle and all of them will be the same except one. In this game, we help you out and show a red dot next to the one that will be tilted.
Your job will be to find that red dot, see the stripes that are tilted, and press the correct button. It's important to note that in this game the red dot will ALWAYS be next to the stripes that are tilted and will appear just a split-second before the stripe patches.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Continue embedded instructions, specifying that it is another practice [y], the cued version [1], the long/short version, and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding additional practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at a minimum, the subject should perform the uncued task with an initial tilt level of 15 degrees or below.
Run a few iterations of the practice rounds at a short duration.
===Block Series===
Now that you're done with the practice and have proven yourself an expert, let's put those skills to the test. In the real game, you will play the different tasks in this order: single patch, uncued, cued, uncued, cued, single patch. Each game is going to have more of each task and we're going to add a feature to make it a bit harder.
This game is designed so that every time you get the right answer, the next one is going to be a bit harder by making the tilt harder and harder to see*.
*demonstrate with your hand that the tilt angle will get progressively shallower.
The game works the opposite way too - if you get one wrong, the next one is going to be a bit easier.
You might have noticed that when you were doing the practice every time you got a correct answer you saw a +3 in the middle of the screen, and every time you got it wrong you got a +0. In the practice the points didn't matter, but now that we're starting the real game, the challenge is to see how many points you can get!
To win the game, you need to get more than 700 points!
<br>
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which stripes are not like the others!
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
fy0tljhav3qq1kdc316lp2v75u5yuym
236
215
2016-01-28T20:07:58Z
Pdonnelly
2
/* Block Series */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Open Terminal. Type "sudo ptb3-matlab", then enter the password.
MATLAB will load. Navigate to the code directory:
cd /.../code
Ensure the script shows the correct monitor "CHDD" and the subject number is correct.
===Introduction===
In this task you are going to play a game, and your goal is to win as many points as possible. This game is a vision game and will test your powers of attention and detection.
===Introduce the Single Stimulus===
The first part of the game looks like this:
[[File:SingleStim3.jpg|800px]]
The stripe patch is going to flash on the screen in one of eight spots around the screen and your job is to say which way the stripes are tilted by pressing the correct button on the keyboard. If they are tilted like this* towards this* corner of the screen, press the Left arrow.
If they are tilted like this* towards the other corner of the screen, press the Right arrow instead.
[[File:singleSmallTilt2.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen.
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
It doesn't matter where the patch of stripes is located on the screen, it only matters which way the stripes are tilted.
If you get it right, you will hear a short ding and see a +3 in green at the center of the screen. This is what it looks like:
[[File:FeedbackPointsCorrect.jpg|800px]]
If you get it wrong, you will hear a different sound and see a +0 in the middle of the screen in red.
[[File:FeedbackPointsError.jpg|800px]]
Any questions? Ready to try a practice round of this part?
===Practice Single Stimulus===
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Uncued task===
In the last part, we make it a bit harder and get rid of that red dot. In this task, your job will be to pay close attention to all of the stripes, find which one is tilted, and press the correct button.
In this one, it will be especially important to keep your eyes fixed on that +, so that you can see all of the patches, since you don't know which one is going to be different.
This is what they look like:
[[File:GaborsCloseUp.jpg|800px]]
As you can see all eight of the stripes are the same except for one.
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to make it a bit harder. This time there will be 8 of those stripe patches, arranged in a circle and all of them will be the same except one. In this game, we help you out and show a red dot next to the one that will be tilted.
Your job will be to find that red dot, see the stripes that are tilted, and press the correct button. It's important to note that in this game the red dot will ALWAYS be next to the stripes that are tilted and will appear just a split-second before the stripe patches.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Continue embedded instructions, specifying that it is another practice [y], the cued version [1], the long/short version, and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding additional practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at a minimum, the subject should perform the uncued task with an initial tilt level of 15 degrees or below.
Run a few iterations of the practice rounds at a short duration.
===Block Series===
Now that you're done with the practice and have proven yourself an expert, let's put those skills to the test. In the real game, you will play the different tasks in this order: single patch, uncued, cued, uncued, cued, single patch. Each game is going to have more of each task and we're going to add a feature to make it a bit harder.
This game is designed so that every time you get the right answer, the next one is going to be a bit harder by making the tilt harder and harder to see*.
*demonstrate with your hand that the tilt angle will get progressively shallower.
The game works the opposite way too - if you get one wrong, the next one is going to be a bit easier.
You might have noticed that when you were doing the practice every time you got a correct answer you saw a +3 in the middle of the screen, and every time you got it wrong you got a +0. In the practice the points didn't matter, but now that we're starting the real game, the challenge is to see how many points you can get!
To win the game, you need to get more than 700 points!
The stripe patches show up quickly, but this is not a game of speed! You can take your time and there's no rush to respond. Don't take too long though - you might forget what you saw!
<br>
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which stripes are not like the others!
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
jczec9p4iks6ye63housvy7znfxsmi9
277
236
2016-05-20T23:03:35Z
Pdonnelly
2
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Open Terminal. Type "sudo ptb3-matlab", then enter the password.
MATLAB will load. Navigate to the code directory:
cd /.../code
Ensure the script shows the correct monitor "CHDD" and the subject number is correct.
===Introduction===
In this task you are going to play a game, and your goal is to win as many points as possible. This game is a vision game and will test your powers of attention and detection.
===Introduce the Single Stimulus===
The first part of the game looks like this:
[[File:SingleStim3.jpg|800px]]
The stripe patch is going to flash on the screen in one of eight spots around the screen and your job is to say which way the stripes are tilted by pressing the correct button on the keyboard. If they are tilted like this* towards this* corner of the screen, press the Left arrow.
If they are tilted like this* towards the other corner of the screen, press the Right arrow instead.
[[File:singleSmallTilt2.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen.
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
It doesn't matter where the patch of stripes is located on the screen, it only matters which way the stripes are tilted.
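For reference (not part of the spoken script): the stimulus geometry is set inside CueingDL1.m. As a rough illustration only, the eight candidate locations and a random tilt could be generated in MATLAB along these lines - the eccentricity value and variable names are assumptions, not values from the lab script.
 % Illustrative sketch (not taken from CueingDL1.m): pick one of eight equally
 % spaced locations around fixation and a random tilt direction for the patch.
 nLocations   = 8;                                  % the patch can appear at 8 spots around fixation
 eccentricity = 200;                                % distance from fixation in pixels (assumed)
 tiltDeg      = 30;                                 % starting tilt in degrees, as entered at the prompt
 angles       = (0:nLocations-1) * 360/nLocations;  % equally spaced polar angles
 xOffsets     = eccentricity * cosd(angles);        % horizontal offsets of the 8 locations
 yOffsets     = eccentricity * sind(angles);        % vertical offsets of the 8 locations
 thisLoc      = randi(nLocations);                  % where the patch appears on this trial
 directions   = [-1 1];                             % "points" toward the left or right corner
 thisTilt     = directions(randi(2)) * tiltDeg;     % signed tilt for this trial
 fprintf('Trial: location %d, tilt %+.1f deg\n', thisLoc, thisTilt);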
If you get it right, you will hear a short ding and see a +3 in green at the center of the screen. This is what it looks like:
[[File:FeedbackPointsCorrect.jpg|800px]]
If you get it wrong, you will hear a different sound and see a +0 in the middle of the screen in red.
[[File:FeedbackPointsError.jpg|800px]]
Any questions? Ready to try a practice round of this part?
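For reference (not part of the spoken script): the feedback display and sounds are produced inside CueingDL1.m. The Psychtoolbox-style sketch below only illustrates the general idea of the +3/+0 feedback described above; the colors, tone frequencies, and timing are assumptions.
 % Illustrative Psychtoolbox sketch of points feedback (not the lab implementation).
 Screen('Preference', 'SkipSyncTests', 1);            % demo only; do not skip sync tests in real runs
 win = Screen('OpenWindow', max(Screen('Screens')), 128);   % mid-gray background
 correct = true;                                       % outcome of the current trial (stand-in)
 if correct
     DrawFormattedText(win, '+3', 'center', 'center', [0 180 0]);   % green points
     Screen('Flip', win);
     Beeper(800, 0.4, 0.15);                           % short "ding" (frequency/duration assumed)
 else
     DrawFormattedText(win, '+0', 'center', 'center', [200 0 0]);   % red, no points
     Screen('Flip', win);
     Beeper(300, 0.4, 0.15);                           % lower error tone (assumed)
 end
 WaitSecs(0.5);
 sca;                                                  % close the Psychtoolbox window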
===Practice Single Stimulus===
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Uncued task===
In the last part, we make it a bit harder and get rid of that red dot. In this task, your job will be to pay close attention to all of the stripes, find which one is tilted, and press the correct button.
In this one, it will be especially important to keep your eyes fixed on that +, so that you can see all of the patches, since you don't know which one is going to be different.
This is what they look like:
[[File:GaborsCloseUp.jpg|800px]]
As you can see all eight of the stripes are the same except for one.
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to make it a bit harder. This time there will be 8 of those stripe patches, arranged in a circle and all of them will be the same except one. In this game, we help you out and show a red dot next to the one that will be tilted.
Your job will be to find that red dot, see the stripes that are tilted, and press the correct button. It's important to note that in this game the red dot will ALWAYS be next to the stripes that are tilted and will appear just a split-second before the stripe patches.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
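For reference (not part of the spoken script): the cue-to-stimulus timing is handled inside CueingDL1.m. The sketch below only illustrates the idea that the red dot is drawn briefly before the stripe patches at the cued location; the interval, dot size, and coordinates are assumptions.
 % Illustrative Psychtoolbox sketch of the cue preceding the stimulus (not the lab code).
 Screen('Preference', 'SkipSyncTests', 1);             % demo only
 win = Screen('OpenWindow', max(Screen('Screens')), 128);
 [cx, cy] = RectCenter(Screen('Rect', win));           % fixation location
 cueSOA  = 0.1;                                        % cue-to-stimulus interval in seconds (assumed)
 cueRect = CenterRectOnPoint([0 0 12 12], cx + 200, cy);   % cued location (illustrative)
 DrawFormattedText(win, '+', 'center', 'center', 0);   % fixation stays on screen
 Screen('FillOval', win, [255 0 0], cueRect);          % red dot at the to-be-tilted patch
 Screen('Flip', win);                                  % cue onset
 WaitSecs(cueSOA);
 DrawFormattedText(win, '+', 'center', 'center', 0);
 % ...the real script would now draw all eight stripe patches, with the tilted one at the cued spot...
 Screen('Flip', win);                                  % stimulus frame replaces the cue
 WaitSecs(0.5);
 sca;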
===Cued Practice===
Continue embedded instructions, specifying that it is another practice [y], the cued version [1], the long/short version, and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Adding additional practice rounds after the above is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at a minimum, the subject should perform the uncued task with an initial tilt level of 15 degrees or below.
Run a few iterations of the practice rounds at a short duration.
===Block Series===
Now that you're done with the practice and have proven yourself an expert, let's put those skills to the test. In the real game, you will play the different tasks in this order: single patch, uncued, cued, uncued, cued, single patch. Each game is going to have more of each task and we're going to add a feature to make it a bit harder.
This game is designed so that every time you get the right answer, the next one is going to be a bit harder by making the tilt harder and harder to see*.
*demonstrate with your hand that the tilt angle will get progressively shallower.
The game works the opposite way too - if you get one wrong, the next one is going to be a bit easier.
You might have noticed that when you were doing the practice every time you got a correct answer you saw a +3 in the middle of the screen, and every time you got it wrong you got a +0. In the practice the points didn't matter, but now that we're starting the real game, the challenge is to see how many points you can get!
To win the game, you need to get more than 700 points!
The stripe patches show up quickly, but this is not a game of speed! You can take your time and there's no rush to respond. Don't take too long though - you might forget what you saw!
<br>
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which stripes are not like the others!
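Note for testers (not part of the spoken script): the adaptive rule described above, where the tilt gets harder after correct answers and easier after errors, lives inside CueingDL1.m. As a rough illustration of that kind of rule, a generic one-up/one-down staircase on the tilt angle looks like this; the step size, bounds, and simulated responses are assumptions, and the script's actual rule may differ.
 % Generic 1-up/1-down staircase sketch (illustrative; CueingDL1.m's rule may differ).
 tilt    = 20;            % starting tilt in degrees
 stepDeg = 2;             % change applied after each response (assumed)
 minTilt = 0.5;           % keep the tilt from collapsing to zero
 nTrials = 40;
 history = zeros(1, nTrials);
 for t = 1:nTrials
     history(t) = tilt;
     correct = rand < 0.75;                      % stand-in for the subject's response
     if correct
         tilt = max(minTilt, tilt - stepDeg);    % correct -> shallower tilt (harder)
     else
         tilt = tilt + stepDeg;                  % wrong -> steeper tilt (easier)
     end
 end
 plot(history); xlabel('Trial'); ylabel('Tilt (deg)');   % difficulty track over the block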
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on performance during practice - somewhere in the range of 15-20 should be ideal.
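For reference: the prompts are defined inside CueingDL1.m. Purely as an illustration of the inputs described above (practice, version 0/1/2, long/short, starting tilt), a prompt sequence of this kind could be collected in MATLAB as follows; the wording and variable names are hypothetical.
 % Hypothetical prompt sequence mirroring the inputs described above
 % (the real prompts inside CueingDL1.m may be worded differently).
 isPractice = strcmpi(input('Practice run? [y/n]: ', 's'), 'y');
 version    = input('Version (0 = uncued, 1 = cued, 2 = single stimulus): ');
 isLong     = strcmpi(input('Long version? [y/n]: ', 's'), 'y');
 startTilt  = input('Starting tilt level in degrees (e.g., 30): ');
 fprintf('practice=%d, version=%d, long=%d, startTilt=%g deg\n', ...
         isPractice, version, isLong, startTilt);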
===Version 3===
In version 3, there are now 8 blocks: 2 single stimulus, 2 uncued, 2 with a red cue, and 2 with a small black R&H cue.
This is what the small black R&H cue looks like:
j4ab0s2dsrwm6u52grwbodutqay5cc8
280
277
2016-05-20T23:10:44Z
Pdonnelly
2
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Open Terminal. Type "sudo ptb3-matlab", then enter the password.
MATLAB will load. Navigate to the code directory:
cd /.../code
Ensure the script shows the correct monitor "CHDD" and the subject number is correct.
===Introduction===
In this task you are going to play a game, and your goal is to win as many points as possible. This game is a vision game and will test your powers of attention and detection.
===Introduce the Single Stimulus===
The first part of the game looks like this:
[[File:SingleStim3.jpg|800px]]
The stripe patch is going to flash on the screen in one of eight spots around the screen and your job is to say which way the stripes are tilted by pressing the correct button on the keyboard. If they are tilted like this* towards this* corner of the screen, press the Left arrow.
If they are tilted like this* towards the other corner of the screen, press the Right arrow instead.
[[File:singleSmallTilt2.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen.
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
It doesn't matter where the patch of stripes is located on the screen, it only matters which way the stripes are tilted.
If you get it right, you will hear a short ding and see a +3 in green at the center of the screen. This is what it looks like:
[[File:FeedbackPointsCorrect.jpg|800px]]
If you get it wrong, you will hear a different sound and see a +0 in the middle of the screen in red.
[[File:FeedbackPointsError.jpg|800px]]
Any questions? Ready to try a practice round of this part?
===Practice Single Stimulus===
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Uncued task===
In the last part, we make it a bit harder and get rid of that red dot. In this task, your job will be to pay close attention to all of the stripe patches, find which one is tilted, and press the correct button.
In this one, it will be especially important to keep your eyes fixed on that +, so that you can see all of the patches, since you don't know which one is going to be different.
This is what they look like:
[[File:GaborsCloseUp.jpg|800px]]
As you can see, all eight of the stripe patches are the same except for one.
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to make it a bit harder. This time there will be 8 of those stripe patches, arranged in a circle and all of them will be the same except one. In this game, we help you out and show a red dot next to the one that will be tilted.
Your job will be to find that red dot, see the stripes that are tilted, and press the correct button. It's important to note that in this game the red dot will ALWAYS be next to the stripes that are tilted and will appear just a split-second before the stripe patches.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Continue embedded instructions, specifying that it is another practice [y], the cued version [1], the long/short version, and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Whether to add additional practice rounds after the above practice is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few iterations of the practice rounds, at short duration.
===Block Series===
Now that you're done with the practice and have proven yourself an expert, let's put those skills to the test. In the real game, you will play the different tasks in this order: single patch, uncued, cued, uncued, cued, single patch. Each game is going to have more of each task, and we're going to add a feature to make it a bit harder.
This game is designed so that every time you get the right answer, the next one is going to be a bit harder by making the tilt harder and harder to see*.
*demonstrate with your hand that the tilt angle will get progressively shallower.
The game works the opposite way too - if you get one wrong, the next one is going to be a bit easier.
You might have noticed that when you were doing the practice every time you got a correct answer you saw a +3 in the middle of the screen, and every time you got it wrong you got a +0. In the practice the points didn't matter, but now that we're starting the real game, the challenge is to see how many points you can get!
To win the game, you need to get more than 700 points!
The stripe patches show up quickly, but this is not a game of speed! You can take your time and there's no rush to respond. Don't take too long though - you might forget what you saw!
<br>
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which stripes are not like the others!
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on the performance during practice - somewhere in the range of 15-20 should be ideal.
===Version 3===
In version 3, there are now 8 blocks: 2 single stimulus, 2 uncued, 2 with a red cue, and 2 with a small black R&H cue.
This is what the small black R&H cue looks like:
[[File:V3SmCueBot.jpg|800px]]
5h7gx2n7rn8nxkjsf92minibstlbmrf
281
280
2016-05-20T23:12:59Z
Pdonnelly
2
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Open Terminal. Type "sudo ptb3-matlab", then enter the password.
MATLAB will load. Navigate to the code directory:
cd /.../code
Ensure the script shows the correct monitor "CHDD" and the subject number is correct.
===Introduction===
In this task you are going to play a game, and your goal is to win as many points as possible. This game is a vision game and will test your powers of attention and detection.
===Introduce the Single Stimulus===
The first part of the game looks like this:
[[File:SingleStim3.jpg|800px]]
The stripe patch is going to flash on the screen in one of eight spots around the screen and your job is to say which way the stripes are tilted by pressing the correct button on the keyboard. If they are tilted like this* towards this* corner of the screen, press the Left arrow.
If they are tilted like this* towards the other corner of the screen, press the Right arrow instead.
[[File:singleSmallTilt2.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen.
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
It doesn't matter where the patch of stripes is located on the screen, it only matters which way the stripes are tilted.
If you get it right, you will hear a short ding and see a +3 in green at the center of the screen. This is what it looks like:
[[File:FeedbackPointsCorrect.jpg|800px]]
If you get it wrong, you will hear a different sound and see a +0 in the middle of the screen in red.
[[File:FeedbackPointsError.jpg|800px]]
Any questions? Ready to try a practice round of this part?
===Practice Single Stimulus===
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Uncued task===
In the last part, we make it a bit harder and get rid of that red dot. In this task, your job will be to pay close attention to all of the stripe patches, find which one is tilted, and press the correct button.
In this one, it will be especially important to keep your eyes fixed on that +, so that you can see all of the patches, since you don't know which one is going to be different.
This is what they look like:
[[File:GaborsCloseUp.jpg|800px]]
As you can see, all eight of the stripe patches are the same except for one.
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to make it a bit harder. This time there will be 8 of those stripe patches, arranged in a circle and all of them will be the same except one. In this game, we help you out and show a red dot next to the one that will be tilted.
Your job will be to find that red dot, see the stripes that are tilted, and press the correct button. It's important to note that in this game the red dot will ALWAYS be next to the stripes that are tilted and will appear just a split-second before the stripe patches.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Continue embedded instructions, specifying that it is another practice [y], the cued version [1], the long/short version, and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Whether to add additional practice rounds after the above practice is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few iterations of the practice rounds, at short duration.
===Block Series===
Now that you're done with the practice and have proven yourself an expert, let's put those skills to the test. In the real game, you will play the different tasks in this order: single patch, uncued, cued, uncued, cued, single patch. Each game is going to have more of each task, and we're going to add a feature to make it a bit harder.
This game is designed so that every time you get the right answer, the next one is going to be a bit harder by making the tilt harder and harder to see*.
*demonstrate with your hand that the tilt angle will get progressively shallower.
The game works the opposite way too - if you get one wrong, the next one is going to be a bit easier.
You might have noticed that when you were doing the practice every time you got a correct answer you saw a +3 in the middle of the screen, and every time you got it wrong you got a +0. In the practice the points didn't matter, but now that we're starting the real game, the challenge is to see how many points you can get!
To win the game, you need to get more than 700 points!
The stripe patches show up quickly, but this is not a game of speed! You can take your time and there's no rush to respond. Don't take too long though - you might forget what you saw!
<br>
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which stripes are not like the others!
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on the performance during practice - somewhere in the range of 15-20 should be ideal.
===Version 3===
In version 3, there are now 8 blocks: 2 single stimulus, 2 uncued, 2 with a red cue, and 2 with a small black R&H cue.
This is what the small black R&H cue looks like:
[[File:V3SmCueBot.png|800px]]
3f84bvpk9o2sw236v4jclebihm892mc
283
281
2016-06-06T22:47:10Z
Alexwhite
3
/* Version 3 */
wikitext
text/x-wiki
__TOC__
==Attention: Spatial Cueing==
Spatial Cueing testing takes place in CHDD in Room 370
# Ensure that Linux system is ready - with Chin Rest set-up
# Perform Vision Test
Open Terminal. Type "sudo ptb3-matlab", then enter the password.
MATLAB will load. Navigate to the code directory:
cd /.../code
Ensure the script shows the correct monitor "CHDD" and the subject number is correct.
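Purely as an illustration (the actual variable names are whatever CueingDL1.m defines; the ones below are hypothetical), the settings to double-check usually look something like this near the top of the script:
% Hypothetical illustration only -- check CueingDL1.m for the real variable names.
monitorName = 'CHDD';    % must match the monitor in the testing room
subjectID   = '101';     % confirm the subject number before every run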
===Introduction===
In this task you are going to play a game, and your goal is to win as many points as possible. This game is a vision game and will test your powers of attention and detection.
===Introduce the Single Stimulus===
The first part of the game looks like this:
[[File:SingleStim3.jpg|800px]]
The stripe patch is going to flash on the screen in one of eight spots around the screen and your job is to say which way the stripes are tilted by pressing the correct button on the keyboard. If they are tilted like this* towards this* corner of the screen, press the Left arrow.
If they are tilted like this* towards the other corner of the screen, press the Right arrow instead.
[[File:singleSmallTilt2.jpg|800px]]
When you're playing the game, you're going to rest your chin here, and focus on the + in the middle of the screen.
*During demonstration, use your hand to show the tilt of the stripes, pointing them toward the corners of the monitor, avoiding descriptions of Left or Right, or Clockwise or Counter-Clockwise
It doesn't matter where the patch of stripes is located on the screen, it only matters which way the stripes are tilted.
If you get it right, you will hear a short ding and see a +3 in green at the center of the screen. This is what it looks like:
[[File:FeedbackPointsCorrect.jpg|800px]]
If you get it wrong, you will hear a different sound and see a +0 in the middle of the screen in red.
[[File:FeedbackPointsError.jpg|800px]]
Any questions? Ready to try a practice round of this part?
===Practice Single Stimulus===
Follow embedded instructions, specifying that it is practice [y], the single stimulus version [2], the shorter version [n], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
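When choosing a starting tilt level, it can help to preview what a given angle looks like. The stripe patch is essentially an oriented grating in a circular window; the following standalone MATLAB sketch (independent of CueingDL1.m; all parameter values are illustrative, not the task's actual stimulus settings) draws one at a chosen tilt:
% Standalone sketch of a "stripe patch" tilted tiltDeg degrees from vertical.
% Spatial frequency and window size are illustrative values only.
tiltDeg  = 30;                              % e.g. the practice starting level
[x, y]   = meshgrid(-100:100, -100:100);    % pixel grid
theta    = tiltDeg * pi/180;
xr       = x*cos(theta) + y*sin(theta);     % rotate the grating orientation
grating  = cos(2*pi*xr/20);                 % stripes with a 20-pixel period
envelope = exp(-(x.^2 + y.^2)/(2*30^2));    % Gaussian window
imagesc(grating .* envelope); colormap gray; axis image; axis off
title(sprintf('%g degree tilt', tiltDeg));  % smaller tilt = harder to judge
Shrinking tiltDeg toward zero mirrors what the adaptive staircase does during the real game.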
===Uncued task===
In the last part, we make it a bit harder and get rid of that red dot. In this task, your job will be to pay close attention to all of the stripe patches, find which one is tilted, and press the correct button.
In this one, it will be especially important to keep your eyes fixed on that +, so that you can see all of the patches, since you don't know which one is going to be different.
This is what they look like:
[[File:GaborsCloseUp.jpg|800px]]
As you can see, all eight of the stripe patches are the same except for one.
[[File:Gabors2.jpg|800px]]
It doesn't matter which side of the screen the tilted stripes are on, so even if they are on the right side of the screen, you would still press the Left Arrow since the tops of the stripes are pointing toward the Left Corner of the Monitor.
It may seem easy now, but the tricky part of the task is that it gets harder and harder to tell which way the stripes are tilted, to the point that you have to pay extra close attention. Which one is different in this picture?
[[File:SmallTilt.jpg|800px]]
===Uncued Practice===
Run CueingDL1.m
Follow embedded instructions, specifying that it is practice [y], the uncued version [0], the long version [y], and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Introduce the Cued task===
In the second task, we're going to make it a bit harder. This time there will be 8 of those stripe patches, arranged in a circle and all of them will be the same except one. In this game, we help you out and show a red dot next to the one that will be tilted.
Your job will be to find that red dot, see the stripes that are tilted, and press the correct button. It's important to note that in this game the red dot will ALWAYS be next to the stripes that are tilted and will appear just a split-second before the stripe patches.
This is what it will look like:
[[File:CuedGabor.jpg|800px]]
Or this:
[[File:CuedGabor2.jpg|800px]]
Let's practice this version!
Remember to keep your chin on the rest, and focus on that +!
===Cued Practice===
Continue embedded instructions, specifying that it is another practice [y], the cued version [1], the long/short version, and initialize the tilt level in degrees [ex. 30]
NOTE: If you need to exit the task at any time, press the 'Q' key instead of a Left/Right Arrow response
===Additional Practice===
Whether to add additional practice rounds after the above practice is a subjective decision, based on how the subject is feeling and performing on each stimulus type.
Regardless, at minimum, the subject should perform the uncued task with an initial tilt level at 15 or below.
Run a few iterations of the practice rounds, at short duration.
===Block Series===
Now that you're done with the practice and have proven yourself an expert, let's put those skills to the test. In the real game, you will play the different tasks in this order: single patch, uncued, cued, uncued, cued, single patch. Each game is going to have more of each task, and we're going to add a feature to make it a bit harder.
This game is designed so that every time you get the right answer, the next one is going to be a bit harder by making the tilt harder and harder to see*.
*demonstrate with your hand that the tilt angle will get progressively shallower.
The game works the opposite way too - if you get one wrong, the next one is going to be a bit easier.
You might have noticed that when you were doing the practice every time you got a correct answer you saw a +3 in the middle of the screen, and every time you got it wrong you got a +0. In the practice the points didn't matter, but now that we're starting the real game, the challenge is to see how many points you can get!
To win the game, you need to get more than 700 points!
The stripe patches show up quickly, but this is not a game of speed! You can take your time and there's no rush to respond. Don't take too long though - you might forget what you saw!
<br>
Remember, keep your chin on the rest and focus hard on that + in the middle of the screen to give your eyes the best chance of detecting which stripes are not like the others!
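For the experimenter: the difficulty adjustment described above is an up/down staircase on the tilt angle. A minimal sketch of the logic, assuming a plain 1-up/1-down rule with made-up step sizes (the actual rule, step sizes, and limits are whatever CueingDL1.m implements):
% Illustrative 1-up/1-down staircase on the tilt angle (degrees).
% Step size, floor, and accuracy are made-up values, not those used by CueingDL1.m.
tilt = 20; step = 1; points = 0;
for trial = 1:100
    correct = rand < 0.75;               % stand-in for the subject's key press
    if correct
        points = points + 3;             % the green +3 feedback
        tilt   = max(tilt - step, 0.5);  % harder: shallower tilt (floor at 0.5 deg)
    else
        tilt   = tilt + step;            % easier after an error (the red +0)
    end
end
fprintf('Final tilt %.1f deg, %d points\n', tilt, points);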
===Run the Series===
Determine the number of blocks in the script.
Run CueingDL1.m
Follow the embedded instructions, specifying that it is NOT practice [n], and set the threshold level based on the performance during practice - somewhere in the range of 15-20 should be ideal.
===Version 3===
In version 3, there are now 8 blocks: 2 single stimulus, 2 uncued, 2 with a red cue, and 2 with a small black R&H cue.
This is what the small black R&H cue looks like:
[[File:V3SmCueBotCrop.png|800px]]
5je389117y8zarp2yloqzc1fsep1su0
Reading Instruction
0
31
182
2015-12-02T00:31:26Z
Jyeatman
1
Created page with "__TOC__ ==What Works Clearinghous== A great resource evaluating scientific evidence for various education programs. http://ies.ed.gov/ncee/wwc/default.aspx"
wikitext
text/x-wiki
__TOC__
==What Works Clearinghouse==
A great resource for evaluating the scientific evidence behind various education programs.
http://ies.ed.gov/ncee/wwc/default.aspx
s2pi755w3008w9b61mpqkdtg970r04i
188
182
2015-12-02T22:33:01Z
Pdonnelly
2
/* What Works Clearinghous */
wikitext
text/x-wiki
__TOC__
==What Works Clearinghouse==
A great resource for evaluating the scientific evidence behind various education programs.
http://ies.ed.gov/ncee/wwc/default.aspx
qnu9x8jhssntvu32zqf8vbuv9r95e0o
Software Setup
0
17
124
2015-11-03T01:29:31Z
Jyeatman
1
Created page with "We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on githu..."
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code. e.g.
git clone https://github.com/yeatmanlab/mritools.git
==Yeatman Lab Tools==
https://github.com/yeatmanlab/mritools
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
https://github.com/vistalab/vistasoft
==Automated Fiber Quantification==
https://github.com/jyeatman/AFQ
==nibabel==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using apt-get:
apt-get update
apt-get install python-nibabel
or pip
pip install nibabel
pvp6z30cd4dl1skzaq6j0lqjlympcaw
126
124
2015-11-03T01:30:51Z
Jyeatman
1
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code. e.g.
git clone https://github.com/yeatmanlab/mritools.git
==Yeatman Lab Tools==
https://github.com/yeatmanlab/mritools
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
https://github.com/vistalab/vistasoft
==Automated Fiber Quantification==
https://github.com/jyeatman/AFQ
==nibabel==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using apt-get:
apt-get update
apt-get install python-nibabel
or pip
pip install nibabel
cgzi8hynnlnaqyxyo35b974pe20hs8e
127
126
2015-11-03T20:07:50Z
Jyeatman
1
/* nibabel */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code. e.g.
git clone https://github.com/yeatmanlab/mritools.git
==Yeatman Lab Tools==
https://github.com/yeatmanlab/mritools
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
https://github.com/vistalab/vistasoft
==Automated Fiber Quantification==
https://github.com/jyeatman/AFQ
==nibabel==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
1g7tiupooxycunic7t4rd35jmtx6i9a
128
127
2015-11-03T20:09:11Z
Jyeatman
1
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code. e.g.
git clone https://github.com/yeatmanlab/mritools.git
==Yeatman Lab Tools==
https://github.com/yeatmanlab/mritools
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
https://github.com/vistalab/vistasoft
==Automated Fiber Quantification==
https://github.com/jyeatman/AFQ
==Anaconda==
Each user should have anaconda set up to manage their Python packages. See instructions here:
https://www.continuum.io/downloads
==nibabel==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
9pvqj4klg904vyunf4bmt30luejcoyv
212
128
2015-12-17T22:56:36Z
Jyeatman
1
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code. e.g.
git clone https://github.com/yeatmanlab/mritools.git
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. See instructions here:
https://www.continuum.io/downloads
==nibabel==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
htkxs63s37wutd30wzq8z3gwflm4quk
251
212
2016-03-02T22:36:17Z
Jyeatman
1
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. See instructions here:
https://www.continuum.io/downloads
==nibabel==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
==Neurodebian repo==
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
94xxzpuc8qccadg1co4qdu46xr008rf
252
251
2016-03-02T22:41:21Z
Jyeatman
1
/* Neurodebian repo */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. See instructions here:
https://www.continuum.io/downloads
==nibabel==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
==Neurodebian repo==
The neurodebian project is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
o8tsadc3e2f6srqpzf66vgwvh8le03p
253
252
2016-03-02T22:42:20Z
Jyeatman
1
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. See instructions here:
https://www.continuum.io/downloads
==nibabel==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
==Neurodebian repo==
The neurodebian project is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
d8ws2a0vz2voauvn7je3tyurybuwcgs
254
253
2016-03-02T22:47:47Z
Jyeatman
1
/* FSL */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. See instructions here:
https://www.continuum.io/downloads
==nibabel==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
==Neurodebian repo==
The neurodebian project is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install ants
ev502oin49opd1mozwok8p1v5s42trp
255
254
2016-03-07T19:56:20Z
Ehuber
4
/* FSL, ANTS and other Neurodebian packages */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. See instructions here:
https://www.continuum.io/downloads
==nibabel==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
==Neurodebian repo==
The neurodebian project is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install ants
7c68zbzajsm55oc3d034ge9ttoowm43
256
255
2016-03-09T00:20:48Z
Ehuber
4
/* FSL, ANTS and other Neurodebian packages */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. See instructions here:
https://www.continuum.io/downloads
==nibabel==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
==Neurodebian repo==
The neurodebian project is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
miwsoj63a8vdpvnouczcjjyuifx28bw
257
256
2016-03-21T16:14:56Z
Jyeatman
1
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
sudo adduser userid
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. See instructions here:
https://www.continuum.io/downloads
==nibabel==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
==Neurodebian repo==
The neurodebian project is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
6eesk5cbtmq59upu7up7q5lswtep4kh
258
257
2016-03-21T22:53:58Z
Jyeatman
1
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
sudo adduser userid
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
pip install dipy
==Neurodebian repo==
The neurodebian project is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
6vlumdmt0944j6ila11i0b1vtzyjxu8
259
258
2016-03-21T23:47:42Z
Jyeatman
1
/* Neurodebian repo */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
sudo adduser userid
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
pip install dipy
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
4e3lbpqzbts927u668vkgl6u7udcbr5
260
259
2016-03-21T23:58:08Z
Dstrodtman
5
Added psychtoolbox install
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
sudo adduser userid
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
pip install dipy
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
gdw3um44kue89gv886fncsfisb8i61f
261
260
2016-03-23T23:22:23Z
Dstrodtman
5
added install instructions for FSL
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
sudo adduser userid
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
pip install dipy
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit .bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
098jx59gt1z01w2ttnmlf86mjd52gg8
284
261
2016-06-08T20:05:55Z
Dstrodtman
5
/* Anaconda */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
sudo adduser userid
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
pip install dipy
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit .bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
godc7m406syv8b4y357gpr664kt3gbb
285
284
2016-06-10T17:08:30Z
Dstrodtman
5
/* FSL, ANTS and other Neurodebian packages */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
sudo adduser userid
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
pip install dipy
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
d68dhr4pbddxvqsf2vtws9itf2i6dvh
286
285
2016-06-10T17:15:31Z
Dstrodtman
5
/* FSL, ANTS and other Neurodebian packages */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
sudo adduser userid
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
pip install dipy
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
sudo gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
lc44wkg527nk6h08dlps8hveci2ifa5
287
286
2016-06-10T17:36:21Z
Dstrodtman
5
/* FSL, ANTS and other Neurodebian packages */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
sudo adduser userid
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
pip install dipy
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
d68dhr4pbddxvqsf2vtws9itf2i6dvh
288
287
2016-06-10T17:39:10Z
Dstrodtman
5
/* Psychtoolbox for Matlab */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
sudo adduser userid
==Yeatman Lab Tools==
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
git clone https://github.com/jyeatman/AFQ.git
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python based toolbox for dealing with nifti images. While nibabel is on github we suggest installing using pip:
pip install nibabel
pip install dipy
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource that makes it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
In order to launch Matlab with Psychtoolbox, use terminal command
ptb3-matlab
fwd0yvtd5jbl8pbpjbb3jiccdj4fu49
289
288
2016-06-11T00:35:49Z
Dstrodtman
5
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
To add a new user with sudo privileges:
sudo useradd -m -G bde,sudo -s /bin/bash userid
To set the user's password:
sudo passwd userid
Have new user enter desired password.
==Git==
All git directories should be maintained together.
mkdir ~/git
==Yeatman Lab Tools==
cd ~/git
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
cd ~/git
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
cd ~/git
git clone https://github.com/jyeatman/AFQ.git
==Setting up Matlab for AFQ==
AFQ requires Matlab r2014a, SPM8, Vistasoft, and AFQ. If you wish to use a different version of Matlab for your core coding, do not let the installer create links in /usr/local/bin during installation. Instead, after installing r2014a to the default location, run:
cd /usr/local/bin
sudo ln -s /usr/local/MATLAB/R2014a/bin/matlab AFQ-matlab
sudo AFQ-matlab
In the Matlab terminal that opens,
cd /usr/local/MATLAB/R2014a/toolbox/local
edit startup
Paste the following code into the new file.
addpath(genpath('~/git/AFQ/'))
addpath(genpath('~/git/BrainTools/'))
addpath(genpath('~/git/vistasoft/'))
addpath(genpath('~/Documents/MATLAB/spm8/'))
Save the file
If all the requisite dependencies for AFQ have been installed according to the instructions on this wiki, you can now execute
AFQ-matlab
from the terminal in order to launch Matlab r2014a with all the necessary dependencies.
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Do this before installing nibabel and dipy. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python-based toolboxes for working with NIfTI images (nibabel) and diffusion MRI data (DIPY). While nibabel is on github, we suggest installing both with pip:
pip install nibabel
pip install dipy
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource making it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==SPM8==
Fill out the form located [http://www.fil.ion.ucl.ac.uk/spm/software/download/ here] to download SPM, making sure to select SPM8. Then run this command from the terminal to install it:
unzip ~/Downloads/spm8.zip -d ~/Documents/MATLAB/
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
In order to launch Matlab with Psychtoolbox, use terminal command
ptb3-matlab
1p9emonv4d39otxpb191uqg0qov37jz
290
289
2016-06-13T16:07:57Z
Dstrodtman
5
/* Setting up a user account */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
To add a new user with sudo privileges (userid should be maintained across all systems).
sudo useradd -m -G bde,sudo -s /bin/bash userid
To set user password.
sudo passwd userid
Have new user enter desired password.
==Git==
All git directories should be maintained together.
mkdir ~/git
==Yeatman Lab Tools==
cd ~/git
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
cd ~/git
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
cd ~/git
git clone https://github.com/jyeatman/AFQ.git
==Setting up Matlab for AFQ==
AFQ requires Matlab r2014a, SPM8, Vistasoft, and AFQ. If you wish to use a different version of Matlab for your core coding, do not have the installer create symbolic links in /usr/local/bin during installation. Instead, after installing r2014a to the default location, create the launcher link manually:
cd /usr/local/bin
sudo ln -s /usr/local/MATLAB/R2014a/bin/matlab AFQ-matlab
sudo AFQ-matlab
In the Matlab terminal that opens,
cd /usr/local/MATLAB/R2014a/toolbox/local
edit startup
Paste the following code into the new file.
addpath(genpath('~/git/AFQ/'))
addpath(genpath('~/git/BrainTools/'))
addpath(genpath('~/git/vistasoft/'))
addpath(genpath('~/Documents/MATLAB/spm8/'))
Save the file
If all the requisite dependencies for AFQ have been installed according to the instructions on this wiki, you can now execute
AFQ-matlab
from the terminal in order to launch Matlab r2014a with all the necessary dependencies.
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Do this before installing nibabel and dipy. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python-based toolboxes for working with NIfTI images (nibabel) and diffusion MRI data (DIPY). While nibabel is on github, we suggest installing both with pip:
pip install nibabel
pip install dipy
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource making it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==SPM8==
Fill out the form located [http://www.fil.ion.ucl.ac.uk/spm/software/download/ here] to download SPM, making sure to select SPM8. Then run this command from the terminal to install it:
unzip ~/Downloads/spm8.zip -d ~/Documents/MATLAB/
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
In order to launch Matlab with Psychtoolbox, use terminal command
ptb3-matlab
sps6zxn4y7ghituqcgpd7vtm3thqm55
295
290
2016-06-14T20:37:55Z
Dstrodtman
5
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
To add a new user with sudo privileges (userid should be maintained across all systems).
sudo useradd -m -G bde,sudo -s /bin/bash userid
To set user password.
sudo passwd userid
Have new user enter desired password.
==Git==
All git directories should be maintained together.
mkdir ~/git
==Yeatman Lab Tools==
cd ~/git
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
cd ~/git
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
cd ~/git
git clone https://github.com/jyeatman/AFQ.git
==Setting up Matlab for AFQ==
AFQ requires Matlab r2014a, SPM8, Vistasoft, and AFQ. If you wish to use a different version of Matlab for your core coding, do not have the installer create symbolic links in /usr/local/bin during installation. Instead, after installing r2014a to the default location, create the launcher link manually:
cd /usr/local/bin
sudo ln -s /usr/local/MATLAB/R2014a/bin/matlab AFQ-matlab
sudo AFQ-matlab
In the Matlab terminal that opens,
cd /usr/local/MATLAB/R2014a/toolbox/local
edit startup
Paste the following code into the new file.
addpath(genpath('~/git/AFQ/'))
addpath(genpath('~/git/BrainTools/'))
addpath(genpath('~/git/vistasoft/'))
addpath(genpath('~/Documents/MATLAB/spm8/'))
Save the file
If all the requisite dependencies for AFQ have been installed according to the instructions on this wiki, you can now execute
AFQ-matlab
from the terminal in order to launch Matlab r2014a with all the necessary dependencies.
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Do this before installing nibabel and dipy. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python-based toolboxes for working with NIfTI images (nibabel) and diffusion MRI data (DIPY). While nibabel is on github, we suggest installing both with pip:
pip install nibabel
pip install dipy
==pyAFQ==
While we are still in the process of developing a full implementation of AFQ in Python, some of these files are required for running DKI.
cd ~/git
git clone https://github.com/yeatmanlab/pyAFQ
cd pyAFQ
python setup.py develop
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource making it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==SPM8==
Fill out the form located [http://www.fil.ion.ucl.ac.uk/spm/software/download/ here] to download SPM, making sure to select SPM8. Then run this command from the terminal to install it:
unzip ~/Downloads/spm8.zip -d ~/Documents/MATLAB/
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
In order to launch Matlab with Psychtoolbox, use terminal command
ptb3-matlab
1oox419r7auyc2m8pnpna3jd0gtziam
296
295
2016-06-14T20:43:08Z
Dstrodtman
5
/* pyAFQ */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
To add a new user with sudo privileges (userid should be maintained across all systems).
sudo useradd -m -G bde,sudo -s /bin/bash userid
To set user password.
sudo passwd userid
Have new user enter desired password.
==Git==
All git directories should be maintained together.
mkdir ~/git
==Yeatman Lab Tools==
cd ~/git
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
cd ~/git
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
cd ~/git
git clone https://github.com/jyeatman/AFQ.git
==Setting up Matlab for AFQ==
AFQ requires Matlab r2014a, SPM8, Vistasoft, and AFQ. If you wish to use a different version of Matlab for your core coding, do not have the installer create symbolic links in /usr/local/bin during installation. Instead, after installing r2014a to the default location, create the launcher link manually:
cd /usr/local/bin
sudo ln -s /usr/local/MATLAB/R2014a/bin/matlab AFQ-matlab
sudo AFQ-matlab
In the Matlab terminal that opens,
cd /usr/local/MATLAB/R2014a/toolbox/local
edit startup
Paste the following code into the new file.
addpath(genpath('~/git/AFQ/'))
addpath(genpath('~/git/BrainTools/'))
addpath(genpath('~/git/vistasoft/'))
addpath(genpath('~/Documents/MATLAB/spm8/'))
Save the file
If all the requisite dependencies for AFQ have been installed according to the instructions on this wiki, you can now execute
AFQ-matlab
from the terminal in order to launch Matlab r2014a with all the necessary dependencies.
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Do this before installing nibabel and dipy. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python-based toolboxes for working with NIfTI images (nibabel) and diffusion MRI data (DIPY). While nibabel is on github, we suggest installing both with pip:
pip install nibabel
pip install dipy
==pyAFQ==
While we are still in the process of developing a full implementation of AFQ in Python, some of these files are required for running DKI.
cd ~/git
git clone https://github.com/yeatmanlab/pyAFQ
cd pyAFQ
python setup.py develop
gedit ~/.bashrc
Copy the following code (replacing userid) into the file that opens and save.
# Add local bin directory to path
export PATH="/home/userid/.local/bin/:$PATH"
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource making it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==SPM8==
Fill out the form located [http://www.fil.ion.ucl.ac.uk/spm/software/download/ here] to download SPM, making sure to select SPM8. Then run this command from the terminal to install it:
unzip ~/Downloads/spm8.zip -d ~/Documents/MATLAB/
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
In order to launch Matlab with Psychtoolbox, use terminal command
ptb3-matlab
rtsns9s655plgyu6n679db3ftblvjtp
297
296
2016-06-14T21:23:49Z
Dstrodtman
5
/* pyAFQ */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
To add a new user with sudo privileges (userid should be maintained across all systems).
sudo useradd -m -G bde,sudo -s /bin/bash userid
To set user password.
sudo passwd userid
Have new user enter desired password.
==Git==
All git directories should be maintained together.
mkdir ~/git
==Yeatman Lab Tools==
cd ~/git
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
cd ~/git
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
cd ~/git
git clone https://github.com/jyeatman/AFQ.git
==Setting up Matlab for AFQ==
AFQ requires Matlab r2014a, SPM8, Vistasoft, and AFQ. If you wish to use a different version of Matlab for your core coding, do not have the installer create symbolic links in /usr/local/bin during installation. Instead, after installing r2014a to the default location, create the launcher link manually:
cd /usr/local/bin
sudo ln -s /usr/local/MATLAB/R2014a/bin/matlab AFQ-matlab
sudo AFQ-matlab
In the Matlab terminal that opens,
cd /usr/local/MATLAB/R2014a/toolbox/local
edit startup
Paste the following code into the new file.
addpath(genpath('~/git/AFQ/'))
addpath(genpath('~/git/BrainTools/'))
addpath(genpath('~/git/vistasoft/'))
addpath(genpath('~/Documents/MATLAB/spm8/'))
Save the file
If all the requisite dependencies for AFQ have been installed according to the instructions on this wiki, you can now execute
AFQ-matlab
from the terminal in order to launch Matlab r2014a with all the necessary dependencies.
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Do this before installing nibabel and dipy. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python-based toolboxes for working with NIfTI images (nibabel) and diffusion MRI data (DIPY). While nibabel is on github, we suggest installing both with pip:
pip install nibabel
pip install dipy
==pyAFQ==
While we are still in the process of developing a full implementation of AFQ in Python, some of these files are required for running DKI.
cd ~/git
git clone https://github.com/yeatmanlab/pyAFQ
cd pyAFQ
python setup.py develop
pip install boto3
gedit ~/.bashrc
Copy the following code (replacing userid) into the file that opens and save.
# Add local bin directory to path
export PATH="/home/userid/.local/bin/:$PATH"
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource making it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==SPM8==
Fill out the form located [http://www.fil.ion.ucl.ac.uk/spm/software/download/ here] to download SPM, making sure to select SPM8. Then run this command from the terminal to install it:
unzip ~/Downloads/spm8.zip -d ~/Documents/MATLAB/
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
In order to launch Matlab with Psychtoolbox, use terminal command
ptb3-matlab
7lrwit6kuye0foi31dfffqgy6txmji5
305
297
2016-07-05T16:10:58Z
Dstrodtman
5
/* Setting up Matlab for AFQ */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
To add a new user with sudo privileges (userid should be maintained across all systems).
sudo useradd -m -G bde,sudo -s /bin/bash userid
To set user password.
sudo passwd userid
Have new user enter desired password.
==Git==
All git directories should be maintained together.
mkdir ~/git
==Yeatman Lab Tools==
cd ~/git
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
cd ~/git
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
cd ~/git
git clone https://github.com/jyeatman/AFQ.git
==Setting up Matlab for AFQ==
AFQ requires Matlab r2014a, SPM8, Vistasoft, and AFQ. If you wish to use a different version of Matlab for your core coding, do not have the installer create symbolic links in /usr/local/bin during installation. Instead, after installing r2014a to the default location, create the launcher link manually:
cd /usr/local/bin
sudo ln -s /usr/local/MATLAB/R2014a/bin/matlab AFQ-matlab
sudo AFQ-matlab
In the Matlab terminal that opens,
cd /usr/local/MATLAB/R2014a/toolbox/local
edit startup
Paste the following code into the new file.
addpath(genpath('~/Documents/MATLAB/spm8/'))
addpath(genpath('~/git/AFQ/'))
addpath(genpath('~/git/BrainTools/'))
addpath(genpath('~/git/vistasoft/'))
Save the file
If all the requisite dependencies for AFQ have been installed according to the instructions on this wiki, you can now execute
AFQ-matlab
from the terminal in order to launch Matlab r2014a with all the necessary dependencies.
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Do this before installing nibabel and dipy. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python-based toolboxes for working with NIfTI images (nibabel) and diffusion MRI data (DIPY). While nibabel is on github, we suggest installing both with pip:
pip install nibabel
pip install dipy
==pyAFQ==
While we are still in the process of developing a full implementation of AFQ in Python, some of these files are required for running DKI.
cd ~/git
git clone https://github.com/yeatmanlab/pyAFQ
cd pyAFQ
python setup.py develop
pip install boto3
gedit ~/.bashrc
Copy the following code (replacing userid) into the file that opens and save.
# Add local bin directory to path
export PATH="/home/userid/.local/bin/:$PATH"
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource making it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==SPM8==
Fill out the form located [http://www.fil.ion.ucl.ac.uk/spm/software/download/ here] to download SPM, making sure to select SPM8. Then run this command from the terminal to install it:
unzip ~/Downloads/spm8.zip -d ~/Documents/MATLAB/
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
In order to launch Matlab with Psychtoolbox, use terminal command
ptb3-matlab
a97ekh65uz304upb7se4di8agmo9bvg
306
305
2016-07-06T18:48:53Z
Jyeatman
1
/* pyAFQ */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
To add a new user with sudo privileges (userid should be maintained across all systems).
sudo useradd -m -G bde,sudo -s /bin/bash userid
To set user password.
sudo passwd userid
Have new user enter desired password.
==Git==
All git directories should be maintained together.
mkdir ~/git
==Yeatman Lab Tools==
cd ~/git
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
cd ~/git
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
cd ~/git
git clone https://github.com/jyeatman/AFQ.git
==Setting up Matlab for AFQ==
AFQ requires Matlab r2014a, SPM8, Vistasoft, and AFQ. If you wish to use a different version of Matlab for your core coding, do not have the installer create symbolic links in /usr/local/bin during installation. Instead, after installing r2014a to the default location, create the launcher link manually:
cd /usr/local/bin
sudo ln -s /usr/local/MATLAB/R2014a/bin/matlab AFQ-matlab
sudo AFQ-matlab
In the Matlab terminal that opens,
cd /usr/local/MATLAB/R2014a/toolbox/local
edit startup
Paste the following code into the new file.
addpath(genpath('~/Documents/MATLAB/spm8/'))
addpath(genpath('~/git/AFQ/'))
addpath(genpath('~/git/BrainTools/'))
addpath(genpath('~/git/vistasoft/'))
Save the file
If all the requisite dependencies for AFQ have been installed according to the instructions on this wiki, you can now execute
AFQ-matlab
from the terminal in order to launch Matlab r2014a with all the necessary dependencies.
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Do this before installing nibabel and dipy. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python-based toolboxes for working with NIfTI images (nibabel) and diffusion MRI data (DIPY). While nibabel is on github, we suggest installing both with pip:
pip install nibabel
pip install dipy
==pyAFQ==
While we are still in the process of developing a full implementation of AFQ in Python, some of these files are required for running DKI.
cd ~/git
git clone https://github.com/yeatmanlab/pyAFQ
cd pyAFQ
python setup.py develop
pip install boto3
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource making it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==SPM8==
Fill out the form located [http://www.fil.ion.ucl.ac.uk/spm/software/download/ here] to download SPM, making sure to select SPM8. Then run this command from the terminal to install it:
unzip ~/Downloads/spm8.zip -d ~/Documents/MATLAB/
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
In order to launch Matlab with Psychtoolbox, use terminal command
ptb3-matlab
toibmsx652ipghlq7jeoqwdscl4ptm6
307
306
2016-07-13T17:59:21Z
Jyeatman
1
/* Setting up Matlab for AFQ */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
To add a new user with sudo privileges (userid should be maintained across all systems).
sudo useradd -m -G bde,sudo -s /bin/bash userid
To set user password.
sudo passwd userid
Have new user enter desired password.
==Git==
All git directories should be maintained together.
mkdir ~/git
==Yeatman Lab Tools==
cd ~/git
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
cd ~/git
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
cd ~/git
git clone https://github.com/jyeatman/AFQ.git
==Setting up Matlab for AFQ==
AFQ requires Matlab r2014a, SPM8, Vistasoft, and AFQ. If you wish to use a different version of Matlab for your core coding, you can set aliases in your ~/.bashrc file
alias matlabr2016="/usr/local/MATLAB/R2016a/bin/matlab"
For Matlab to see all of the necessary code, create a file named startup.m and place it in your home directory. The startup file is executed automatically when Matlab starts up. To add the folders of code to your Matlab search path (making them visible to Matlab), include:
addpath(genpath('~/Documents/MATLAB/spm8/'))
addpath(genpath('~/git/AFQ/'))
addpath(genpath('~/git/BrainTools/'))
addpath(genpath('~/git/vistasoft/'))
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Do this before installing nibabel and dipy. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python-based toolboxes for working with NIfTI images (nibabel) and diffusion MRI data (DIPY). While nibabel is on github, we suggest installing both with pip:
pip install nibabel
pip install dipy
==pyAFQ==
While we are still in the process of developing a full implementation of AFQ in Python, some of these files are required for running DKI.
cd ~/git
git clone https://github.com/yeatmanlab/pyAFQ
cd pyAFQ
python setup.py develop
pip install boto3
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource making it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==SPM8==
Fill out the form located [http://www.fil.ion.ucl.ac.uk/spm/software/download/ here] to download SPM, making sure to select SPM8. Then run this command from the terminal to install it:
unzip ~/Downloads/spm8.zip -d ~/Documents/MATLAB/
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
In order to launch Matlab with Psychtoolbox, use terminal command
ptb3-matlab
8og5dzgxqycfo2w2b9b932rtjctfbz6
308
307
2016-07-13T18:16:55Z
Dstrodtman
5
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
To add a new user with sudo privileges (userid should be maintained across all systems).
sudo useradd -m -G bde,sudo -s /bin/bash userid
To set user password.
sudo passwd userid
Have new user enter desired password.
==Git==
All git directories should be maintained together.
mkdir ~/git
==Yeatman Lab Tools==
cd ~/git
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
cd ~/git
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
cd ~/git
git clone https://github.com/jyeatman/AFQ.git
==Setting up Matlab for AFQ==
AFQ requires Matlab r2014a, SPM8, Vistasoft, and AFQ. If you wish to use a different version of Matlab for your core coding, you can set aliases in your ~/.bashrc file
alias matlabr2016="/usr/local/MATLAB/R2016a/bin/matlab"
For Matlab to see all of the necessary code, create a file named startup.m and place it in your home directory. The startup file is executed automatically when Matlab starts up. To add the folders of code to your Matlab search path (making them visible to Matlab), include:
addpath(genpath('~/Documents/MATLAB/spm8/'))
addpath(genpath('~/git/AFQ/'))
addpath(genpath('~/git/BrainTools/'))
addpath(genpath('~/git/vistasoft/'))
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Do this before installing nibabel and dipy. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python-based toolboxes for working with NIfTI images (nibabel) and diffusion MRI data (DIPY). While nibabel is on github, we suggest installing both with pip:
pip install nibabel
pip install dipy
==pyAFQ==
While we are still in the process of developing a full implementation of AFQ in Python, some of these files are required for running DKI.
cd ~/git
git clone https://github.com/yeatmanlab/pyAFQ
cd pyAFQ
python setup.py develop
pip install boto3
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource making it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==Freesurfer==
The steps and download are available [https://surfer.nmr.mgh.harvard.edu/fswiki/DownloadAndInstall here]; the instructions are repeated below. Download the Linux CentOS 6 x86_64 install file.
cd ~/Downloads
sudo tar -C /usr/local -xzvf freesurfer-Linux-centos6_x86_64-stable-pub-v5.3.0.tar.gz
You'll need to register [https://surfer.nmr.mgh.harvard.edu/registration.html here]. The registration e-mail will tell you which text to copy into a new file, which you can create/edit by
sudo gedit /usr/local/freesurfer/license.txt
We'll be creating a directory for processed subject files on the scratch drive.
mkdir /mnt/scratch/projects/freesurfer
Finally, you'll need to edit your bashrc file.
gedit ~/.bashrc
Copy the following text
export FREESURFER_HOME=/usr/local/freesurfer
source $FREESURFER_HOME/SetUpFreeSurfer.sh > /dev/null
export SUBJECTS_DIR=/mnt/scratch/projects/freesurfer
==SPM8==
Fill out the form located [http://www.fil.ion.ucl.ac.uk/spm/software/download/ here] to download SPM, making sure to select SPM8. Then run this command from the terminal to install it:
unzip ~/Downloads/spm8.zip -d ~/Documents/MATLAB/
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
In order to launch Matlab with Psychtoolbox, use terminal command
ptb3-matlab
nachz2zg6t94zz63w5vqcnca76sxkbu
309
308
2016-07-13T18:20:33Z
Dstrodtman
5
/* Freesurfer */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
To add a new user with sudo privileges (userid should be maintained across all systems).
sudo useradd -m -G bde,sudo -s /bin/bash userid
To set user password.
sudo passwd userid
Have new user enter desired password.
==Git==
All git directories should be maintained together.
mkdir ~/git
==Yeatman Lab Tools==
cd ~/git
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
cd ~/git
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
cd ~/git
git clone https://github.com/jyeatman/AFQ.git
==Setting up Matlab for AFQ==
AFQ requires Matlab r2014a, SPM8, Vistasoft, and AFQ. If you wish to use a different version of Matlab for your core coding, you can set aliases in your ~/.bashrc file
alias matlabr2016="/usr/local/MATLAB/R2016a/bin/matlab"
For Matlab to see all of the necessary code, create a file named startup.m and place it in your home directory. The startup file is executed automatically when Matlab starts up. To add the folders of code to your Matlab search path (making them visible to Matlab), include:
addpath(genpath('~/Documents/MATLAB/spm8/'))
addpath(genpath('~/git/AFQ/'))
addpath(genpath('~/git/BrainTools/'))
addpath(genpath('~/git/vistasoft/'))
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Do this before installing nibabel and dipy. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
==nibabel and DIPY==
Python-based toolboxes for working with NIfTI images (nibabel) and diffusion MRI data (DIPY). While nibabel is on github, we suggest installing both with pip:
pip install nibabel
pip install dipy
==pyAFQ==
While we are still in the process of developing a full implementation of AFQ in Python, some of these files are required for running DKI.
cd ~/git
git clone https://github.com/yeatmanlab/pyAFQ
cd pyAFQ
python setup.py develop
pip install boto3
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource making it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
==Freesurfer==
The steps and download are available [https://surfer.nmr.mgh.harvard.edu/fswiki/DownloadAndInstall here]; the instructions are repeated below. Download the Linux CentOS 6 x86_64 install file.
cd ~/Downloads
sudo tar -C /usr/local -xzvf freesurfer-Linux-centos6_x86_64-stable-pub-v5.3.0.tar.gz
You'll need to register [https://surfer.nmr.mgh.harvard.edu/registration.html here]. The registration e-mail will tell you which text to copy into a new file, which you can create/edit by
sudo gedit /usr/local/freesurfer/license.txt
We'll be creating a directory for processed subject files on the scratch drive.
mkdir -p /mnt/scratch/projects/freesurfer
Finally, you'll need to edit your bashrc file.
gedit ~/.bashrc
Copy the following text
export FREESURFER_HOME=/usr/local/freesurfer
source $FREESURFER_HOME/SetUpFreeSurfer.sh > /dev/null
export SUBJECTS_DIR=/mnt/scratch/projects/freesurfer
==SPM8==
Fill out the form located [http://www.fil.ion.ucl.ac.uk/spm/software/download/ here] to download SPM, making sure to select SPM8. Then run this command from the terminal to install it:
unzip ~/Downloads/spm8.zip -d ~/Documents/MATLAB/
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
In order to launch Matlab with Psychtoolbox, use terminal command
ptb3-matlab
jfkxdgfwkhtm6icig4m8r4h6xhgtu25
310
309
2016-07-13T19:08:04Z
Dstrodtman
5
/* Freesurfer */
wikitext
text/x-wiki
We rely on a number of different software packages to analyze MRI and MEG data. Here is a growing tally of what we use and where to get it. Most of these tools reside on github and you should definitely use git to clone the repositories rather than download a snapshot of the code.
==Setting up a user account==
To add a new user with sudo privileges (userid should be maintained across all systems).
sudo useradd -m -G bde,sudo -s /bin/bash userid
To set user password.
sudo passwd userid
Have new user enter desired password.
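To confirm that the account was created with the expected group memberships, a quick optional check from the terminal is:
# bde and sudo should both appear in the group list
groups userid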
==Git==
All git directories should be maintained together.
mkdir ~/git
==Yeatman Lab Tools==
cd ~/git
git clone https://github.com/yeatmanlab/BrainTools.git
==Vistasoft==
MATLAB based toolbox, from Brian Wandell's lab at Stanford, that contains many functions we rely on for analyzing diffusion MRI and functional MRI data
cd ~/git
git clone https://github.com/vistalab/vistasoft.git
==Automated Fiber Quantification==
cd ~/git
git clone https://github.com/jyeatman/AFQ.git
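Since these are ordinary git clones, keeping them current only requires pulling. A minimal sketch for updating all three repositories at once, assuming the ~/git layout described above:
# Pull the latest changes for each lab repository
for repo in BrainTools vistasoft AFQ; do (cd ~/git/$repo && git pull); done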
==Setting up Matlab for AFQ==
AFQ requires Matlab r2014a, SPM8, Vistasoft, and AFQ. If you wish to use a different version of Matlab for your core coding, you can set aliases in your ~/.bashrc file
alias matlabr2016="/usr/local/MATLAB/R2016a/bin/matlab"
For Matlab to see all of the necessary code, create a file named startup.m and place it in your home directory. The startup file is executed automatically when Matlab starts up. To add the folders of code to your Matlab search path (making them visible to Matlab), include:
addpath(genpath('~/Documents/MATLAB/spm8/'))
addpath(genpath('~/git/AFQ/'))
addpath(genpath('~/git/BrainTools/'))
addpath(genpath('~/git/vistasoft/'))
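One way to create the startup file from the terminal is sketched below; it simply writes the addpath lines above into ~/startup.m (adjust the paths if your installs live elsewhere):
cat > ~/startup.m <<'EOF'
addpath(genpath('~/Documents/MATLAB/spm8/'))
addpath(genpath('~/git/AFQ/'))
addpath(genpath('~/git/BrainTools/'))
addpath(genpath('~/git/vistasoft/'))
EOF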
==Anaconda==
Each user should have anaconda set up to manage their Python packages. Do this before installing nibabel and dipy. Anaconda will manage versions and dependencies for each user separately. Each install will be specific to the user that installs it. Do not install as root, as you will deny yourself privileges to your own directory. You must restart your terminal session after install before using Python or the conda command. See instructions here:
https://www.continuum.io/downloads
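As a rough sketch of what the install looks like: download the 64-bit Linux installer from the page above into ~/Downloads and run it with bash, accepting the default install location in your home directory. The installer filename below is only an example and changes with each release.
bash ~/Downloads/Anaconda2-4.1.1-Linux-x86_64.sh
# After restarting the terminal, conda should be on your path
conda --version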
==nibabel and DIPY==
Python-based toolboxes for working with NIfTI images (nibabel) and diffusion MRI data (DIPY). While nibabel is on github, we suggest installing both with pip:
pip install nibabel
pip install dipy
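To confirm that both packages landed in your Anaconda environment, each import should succeed and report a version:
python -c "import nibabel; print(nibabel.__version__)"
python -c "import dipy; print(dipy.__version__)"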
==pyAFQ==
While we are still in the process of developing a full implementation of AFQ in Python, some of these files are required for running DKI.
cd ~/git
git clone https://github.com/yeatmanlab/pyAFQ
cd pyAFQ
python setup.py develop
pip install boto3
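To check that the development install registered correctly, the sketch below looks for the package in pip's list and confirms that boto3 imports; the exact name pip reports for pyAFQ is an assumption, so adjust the pattern if needed.
pip list | grep -i afq
python -c "import boto3"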
==Neurodebian repo==
The [http://neuro.debian.net/ neurodebian project] is an incredible resource making it easy to install most of the widely used neuroimaging software packages.
wget -O- http://neuro.debian.net/lists/trusty.us-nh.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver hkp://pgp.mit.edu:80 0xA5D32F012649A5A9
sudo apt-get update
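To verify that apt now sees the NeuroDebian packages before installing anything, the candidate version reported below should come from the neurodebian repository:
apt-cache policy fsl-5.0-complete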
==FSL, ANTS and other Neurodebian packages==
Once you have added the neurodebian repos you can easily install fsl and other packages
sudo apt-get install fsl-5.0-complete
sudo apt-get install fsl-5.0-eddy-nonfree
sudo apt-get install ants
FSL requires an edit to the .bashrc file
gedit ~/.bashrc
Copy the following into the file that opens:
# FSL setup
. /etc/fsl/5.0/fsl.sh
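After opening a new terminal (so the .bashrc change takes effect), a quick sanity check of both installs might look like the following, assuming the packages installed to their standard locations:
# fsl.sh should have defined FSLDIR
echo $FSLDIR
# an ANTs binary should be on the path
which antsRegistration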
==Freesurfer==
The steps and download are available [https://surfer.nmr.mgh.harvard.edu/fswiki/DownloadAndInstall here]; the instructions are repeated below. Download the Linux CentOS 6 x86_64 install file.
cd ~/Downloads
sudo tar -C /usr/local -xzvf freesurfer-Linux-centos6_x86_64-stable-pub-v5.3.0.tar.gz
You'll need to register [https://surfer.nmr.mgh.harvard.edu/registration.html here]. The registration e-mail will tell you which text to copy into a new file, which you can create/edit by
sudo gedit /usr/local/freesurfer/license.txt
We'll be creating a directory for processed subject files on the scratch drive.
mkdir -p /mnt/scratch/projects/freesurfer
We need to resolve an issue with a jpeg library by creating a symbolic link.
cd /usr/lib/x86_64-linux-gnu
sudo ln -s libjpeg.so.8 libjpeg.so.62
Finally, you'll need to edit your bashrc file.
gedit ~/.bashrc
Copy the following text
export FREESURFER_HOME=/usr/local/freesurfer
source $FREESURFER_HOME/SetUpFreeSurfer.sh > /dev/null
export SUBJECTS_DIR=/mnt/scratch/projects/freesurfer
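Opening a new terminal and running the following should confirm the environment is set up; this is only a sanity check:
# recon-all should resolve to the new install and SUBJECTS_DIR should point at the scratch directory
which recon-all
echo $SUBJECTS_DIR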
==SPM8==
Fill out the form located [http://www.fil.ion.ucl.ac.uk/spm/software/download/ here] to download SPM, making sure to select SPM8. Then run this command from the terminal to install it:
unzip ~/Downloads/spm8.zip -d ~/Documents/MATLAB/
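If the unzip worked, the toolbox should now sit alongside your other MATLAB code; for example, the main SPM function should be present:
ls ~/Documents/MATLAB/spm8/spm.m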
==Psychtoolbox for Matlab==
Requires Neurodebian repo to install.
sudo apt-get install matlab-psychtoolbox-3
In order to launch Matlab with Psychtoolbox, use terminal command
ptb3-matlab
lpzd5a5zclfusejwcspbuje0axycmg4
System Setup
0
51
293
2016-06-13T19:56:52Z
Dstrodtman
5
wikitext
text/x-wiki
This page will eventually have a detailed description for a clean install/upgrade of the lab OS. For now, helpful tips will be posted here to make sure somewhat complicated tasks are done correctly. In all examples
userid
should be replaced with the currently active userid (your login).
==Mount Drives==
If you have installed a new hard drive or have upgraded or changed your OS, you may need to remount your scratch drives and give yourself permissions to access them. Before beginning, you should make sure that you have sudo privileges and membership in the bde group. To check this, in the terminal, type
groups userid
If you are not a member of the group sudo, you will need to request sudo privileges from another member of the lab. If you are not a member of bde,
sudo groupadd bde
sudo usermod -a -G bde userid
Now, to find the UUID of the drive you are mounting:
sudo blkid
The desired drive should be listed as /dev/sdb (with a number after it if you have several additional drives installed).
Edit the fstab file to specify mount instructions.
sudo gedit /etc/fstab
Copy the following code, replacing the example UUID with that specific to your machine:
# Mount for scratch space
UUID=7fx5ebx6-b1x2-47ce-882b-af780899189a /mnt/scratch ext4 defaults 0 0
Save and exit the file.
To mount,
sudo mount -a
If you get an error telling you scratch does not exist,
sudo mkdir /mnt/scratch
Now we set permissions:
sudo chown -R userid:bde /mnt/scratch
==Printer Setup==
Our mighty printer, [https://www.youtube.com/watch?v=m96VcCF4Ess Elwha], never jams, and prints duplex and in color.
ping elwha.ilabs.uw.edu
Note the IP address listed.
sudo system-config-printer
Click add, expand the selection for network printer, and find the printer tied to the IP address you got from pinging Elwha (HP Color LaserJet MFP M477fdn). Use the recommended drivers.
ib5y7udd7g77k81ev0h89hmsq9untua
294
293
2016-06-13T20:31:52Z
Dstrodtman
5
wikitext
text/x-wiki
This page will eventually have a detailed description for a clean install/upgrade of the lab OS. For now, helpful tips will be posted here to make sure somewhat complicated tasks are done correctly. In all examples
userid
should be replaced with the currently active userid (your login).
==Mount Drives==
If you have installed a new hard drive or have upgraded or changed your OS, you may need to remount your scratch drives and give yourself permissions to access them. Before beginning, you should make sure that you have sudo privileges and membership in the bde group. To check this, in the terminal, type
groups userid
If you are not a member of the group sudo, you will need to request sudo privileges from another member of the lab. If you are not a member of bde,
sudo groupadd bde
sudo usermod -a -G bde userid
Now, to find the UUID of the drive you are mounting:
sudo blkid
The desired drive should be listed as /dev/sdb (with a number after it if you have several additional drives installed).
Edit the fstab file to specify mount instructions.
sudo gedit /etc/fstab
Copy the following code, replacing the example UUID with that specific to your machine:
# Mount for scratch space
UUID=7fx5ebx6-b1x2-47ce-882b-af780899189a /mnt/scratch ext4 defaults 0 0
Save and exit the file.
To mount,
sudo mount -a
If you get an error telling you scratch does not exist,
sudo mkdir /mnt/scratch
Now we set permissions:
sudo chown -R userid:bde /mnt/scratch
==Printer Setup==
Our mighty printer, [https://www.youtube.com/watch?v=m96VcCF4Ess Elwha], never jams, and prints duplex and in color.
ping elwha.ilabs.uw.edu
Note the IP address listed.
sudo system-config-printer
Click add, expand the selection for network printer, and find the printer tied to the IP address you got from pinging Elwha (HP Color LaserJet MFP M477fdn). Use the recommended drivers.
j1xxb0xn6iqkvhpe3hkr4upy2woadam
298
294
2016-06-14T23:11:08Z
Dstrodtman
5
wikitext
text/x-wiki
This page will eventually have a detailed description for a clean install/upgrade of the lab OS. For now, helpful tips will be posted here to make sure somewhat complicated tasks are done correctly. In all examples
userid
should be replaced with the currently active userid (your login).
==Static IP Address==
You will automatically pull an IP address from the network when you plug in, but in order to be able to ssh into your machine remotely and communicate on the cluster, you will need a static IP address set. This can be done through the network connections setting in the GUI (lower right in Mint). Click network connections, select your currently active connection, click the IPv4 Settings tab, select 'Manual' from the drop down Method: list. Fill out the information found under 'Networking' in the lab handbook.
==Mount Drives==
If you have installed a new hard drive or have upgraded or changed your OS, you may need to remount your scratch drives and give yourself permissions to access them. Before beginning, you should make sure that you have sudo privileges and membership in the bde group. To check this, in the terminal, type
groups userid
If you are not a member of the group sudo, you will need to request sudo privileges from another member of the lab. If you are not a member of bde,
sudo groupadd bde
sudo usermod -a -G bde userid
Now, to find the UUID of the drive you are mounting:
sudo blkid
The desired drive should be listed as /dev/sdb (with a number after it if you have several additional drives installed).
Edit the fstab file to specify mount instructions.
sudo gedit /etc/fstab
Copy the following code, replacing the example UUID with that specific to your machine:
# Mount for scratch space
UUID=7fx5ebx6-b1x2-47ce-882b-af780899189a /mnt/scratch ext4 defaults 0 0
Save and exit the file.
To mount,
sudo mount -a
If you get an error telling you scratch does not exist,
sudo mkdir /mnt/scratch
Now we set permissions:
sudo chown -R userid:bde /mnt/scratch
==Printer Setup==
Our mighty printer, [https://www.youtube.com/watch?v=m96VcCF4Ess Elwha], never jams, and prints duplex and in color.
ping elwha.ilabs.uw.edu
Note the IP address listed.
sudo system-config-printer
Click add, expand the selection for network printer, and find the printer tied to the IP address you got from pinging Elwha (HP Color LaserJet MFP M477fdn). Use the recommended drivers.
8s9kkggtirl6rfnq8zlx66crxwhzatg
301
298
2016-06-20T21:05:28Z
Dstrodtman
5
wikitext
text/x-wiki
This page will eventually have a detailed description for a clean install/upgrade of the lab OS. For now, helpful tips will be posted here to make sure somewhat complicated tasks are done correctly. In all examples
userid
should be replaced with the currently active userid (your login).
==Static IP Address==
You will automatically pull an IP address from the network when you plug in, but in order to be able to ssh into your machine remotely and communicate on the cluster, you will need a static IP address set. This can be done through the network connections setting in the GUI (lower right in Mint). Click network connections, select your currently active connection, click the IPv4 Settings tab, select 'Manual' from the drop down Method: list. Fill out the information found under 'Networking' in the lab handbook.
==New Hard Drive==
If you installed a new hard drive on your machine, first find out where it is located. Take note of the logical name printed out by the following:
sudo lshw -C disk
This should be /dev/sd'''x''' (replace this x with the letter printed). To format the drive as one partition,
sudo mkfs -t ext4 /dev/sdx
==Mount Drives==
If you have installed a new hard drive or have upgraded or changed your OS, you may need to remount your scratch drives and give yourself permissions to access them. Before beginning, you should make sure that you have sudo privileges and membership in the bde group. To check this, in the terminal, type
groups userid
If you are not a member of the group sudo, you will need to request sudo privileges from another member of the lab. If you are not a member of bde,
sudo groupadd bde
sudo usermod -a -G bde userid
Now, to find the UUID of the drive you are mounting:
sudo blkid
The desired drive should be listed as /dev/sdb (with a number after it if you have several additional drives installed).
Edit the fstab file to specify mount instructions.
sudo gedit /etc/fstab
Copy the following code, replacing the example UUID with that specific to your machine:
# Mount for scratch space
UUID=7fx5ebx6-b1x2-47ce-882b-af780899189a /mnt/scratch ext4 defaults 0 0
Save and exit the file.
To mount,
sudo mount -a
If you get an error telling you scratch does not exist,
sudo mkdir /mnt/scratch
Now we set permissions:
sudo chown -R userid:bde /mnt/scratch
==Printer Setup==
Our mighty printer, [https://www.youtube.com/watch?v=m96VcCF4Ess Elwha], never jams, and prints duplex and in color.
ping elwha.ilabs.uw.edu
Note the IP address listed.
sudo system-config-printer
Click add, expand the selection for network printer, and find the printer tied to the IP address you got from pinging Elwha (HP Color LaserJet MFP M477fdn). Use the recommended drivers.
ib68mf6qlk0raebe8t8wb048h9gyp6v
302
301
2016-06-20T21:05:53Z
Dstrodtman
5
/* New Hard Drive */
wikitext
text/x-wiki
This page will eventually have a detailed description for a clean install/upgrade of the lab OS. For now, helpful tips will be posted here to make sure somewhat complicated tasks are done correctly. In all examples
userid
should be replaced with the currently active userid (your login).
==Static IP Address==
You will automatically pull an IP address from the network when you plug in, but in order to be able to ssh into your machine remotely and communicate on the cluster, you will need a static IP address set. This can be done through the network connections setting in the GUI (lower right in Mint). Click network connections, select your currently active connection, click the IPv4 Settings tab, select 'Manual' from the drop down Method: list. Fill out the information found under 'Networking' in the lab handbook.
==New Hard Drive==
If you installed a new hard drive on your machine, first find out where it is located. Take note of the logical name printed out by the following:
sudo lshw -C disk
This should be /dev/sd'''x''' (replace this x with the letter printed). To format the drive as one partition,
sudo mkfs -t ext4 /dev/sd'''x'''
==Mount Drives==
If you have installed a new hard drive or have upgraded or changed your OS, you may need to remount your scratch drives and give yourself permissions to access them. Before beginning, you should make sure that you have sudo privileges and membership in the bde group. To check this, in the terminal, type
groups userid
If you are not a member of the group sudo, you will need to request sudo privileges from another member of the lab. If you are not a member of bde,
sudo groupadd bde
sudo usermod -a -G bde userid
Now, to find the UUID of the drive you are mounting:
sudo blkid
The desired drive should be listed as /dev/sdb (with a number after it if you have several additional drives installed).
Edit the fstab file to specify mount instructions.
sudo gedit /etc/fstab
Copy the following code, replacing the example UUID with that specific to your machine:
# Mount for scratch space
UUID=7fx5ebx6-b1x2-47ce-882b-af780899189a /mnt/scratch ext4 defaults 0 0
Save and exit the file.
To mount,
sudo mount -a
If you get an error telling you scratch does not exist,
sudo mkdir /mnt/scratch
Now set ownership so that you and the bde group can access the drive:
sudo chown -R userid:bde /mnt/scratch
==Printer Setup==
Our mighty printer, [https://www.youtube.com/watch?v=m96VcCF4Ess Elwha], never jams, and prints duplex and in color.
ping elwha.ilabs.uw.edu
Note the IP address listed.
sudo system-config-printer
Click Add, expand the selection for network printer, and choose the printer at the IP address you noted when pinging Elwha (HP Color LaserJet MFP M477fdn). Use the recommended drivers.
kiyli520keqkaksshybn04dijlq4m6v
311
302
2016-07-27T17:24:53Z
Dstrodtman
5
/* Mount Drives */
wikitext
text/x-wiki
This page will eventually have a detailed description of a clean install/upgrade of the lab OS. For now, helpful tips will be posted here to make sure somewhat complicated tasks are done correctly. In all examples
userid
should be replaced with the currently active userid (your login).
==Static IP Address==
You will automatically pull an IP address from the network when you plug in, but you will need a static IP address in order to ssh into your machine remotely and to communicate on the cluster. This can be done through the network connections setting in the GUI (lower right in Mint): click network connections, select your currently active connection, click the IPv4 Settings tab, and select 'Manual' from the Method drop-down list. Fill out the information found under 'Networking' in the lab handbook.
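If you prefer to do this from a terminal instead of the GUI, NetworkManager's nmcli tool can make the same change. This is only a sketch: the connection name and addresses below are placeholders, so substitute the values listed under 'Networking' in the lab handbook.
# List connections so you can identify the active wired one
nmcli connection show
# Set a static address on that connection (placeholder name and addresses shown)
nmcli connection modify "Wired connection 1" ipv4.method manual ipv4.addresses 192.0.2.10/24 ipv4.gateway 192.0.2.1 ipv4.dns "192.0.2.53"
# Re-activate the connection so the new settings take effect
nmcli connection up "Wired connection 1"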
==New Hard Drive==
If you installed a new hard drive on your machine, first find out where it is located. Take note of the logical name printed out by the following:
sudo lshw -C disk
This should be /dev/sd'''x''' (replace '''x''' with the letter printed). To format the entire drive as a single ext4 filesystem,
sudo mkfs -t ext4 /dev/sd'''x'''
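The command above puts the filesystem directly on the raw disk. If you would rather give the drive a partition table first (so the filesystem shows up as, e.g., /dev/sdb1 in the next section), a sketch like the following should work; /dev/sdb is only an assumption, so substitute the letter that lshw reported.
# Create a GPT partition table with one partition spanning the whole disk (assumes the drive is /dev/sdb)
sudo parted /dev/sdb --script mklabel gpt mkpart primary ext4 0% 100%
# Format that partition as ext4
sudo mkfs -t ext4 /dev/sdb1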
==Mount Drives==
If you have installed a new hard drive or have upgraded or changed your OS, you may need to remount your scratch drives and give yourself permission to access them. Before beginning, make sure that you have sudo privileges and membership in the bde group. To check, run the following in a terminal:
groups userid
If you are not in the sudo group, request sudo privileges from another member of the lab. If you are not a member of bde, create the group (if it does not already exist) and add yourself to it, then log out and back in for the new membership to take effect:
sudo groupadd bde
sudo usermod -a -G bde userid
Now, to find the UUID of the drive you are mounting:
sudo blkid
The desired drive should be listed as /dev/sdb (or /dev/sdb1 if it has a partition table; the letter may differ if you have several additional drives installed).
Edit the fstab file to specify mount instructions.
sudo gedit /etc/fstab
Copy the following code, replacing the example UUID with that specific to your machine:
# Mount for scratch space
UUID=7fx5ebx6-b1x2-47ce-882b-af780899189a /mnt/scratch ext4 defaults 0 0
Save and exit the file.
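Alternatively, the entry can be appended without opening an editor. The one-liner below is a sketch that assumes the new filesystem is on /dev/sdb1; adjust the device to match your blkid output.
# Append an fstab entry using the UUID that blkid reports for the device
echo "UUID=$(sudo blkid -s UUID -o value /dev/sdb1) /mnt/scratch ext4 defaults 0 0" | sudo tee -a /etc/fstab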
To mount,
sudo mount -a
If you get an error saying that /mnt/scratch does not exist, create the mount point and then run sudo mount -a again:
sudo mkdir /mnt/scratch
Now set ownership so that you and the bde group can access the drive:
sudo chown -R userid:bde /mnt/scratch
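To confirm that everything worked, a quick check along these lines can help (the test file name is arbitrary):
# Verify the mount, the ownership, and that you can write to it
df -h /mnt/scratch
ls -ld /mnt/scratch
touch /mnt/scratch/test_write && rm /mnt/scratch/test_write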
==Printer Setup==
Our mighty printer, [https://www.youtube.com/watch?v=m96VcCF4Ess Elwha], never jams, and prints duplex and in color.
ping elwha.ilabs.uw.edu
Note the IP address listed.
sudo system-config-printer
Click Add, expand the selection for network printer, and choose the printer at the IP address you noted when pinging Elwha (HP Color LaserJet MFP M477fdn). Use the recommended drivers.
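If you would rather add the printer from the command line, CUPS's lpadmin can register it in one step. The queue name and device URI below are assumptions; check the URIs your machine actually sees (for example with lpinfo -v) before relying on one.
# Register Elwha as a driverless IPP queue (substitute the IP address noted from the ping above)
sudo lpadmin -p Elwha -E -v "ipp://<printer-ip>/ipp/print" -m everywhere
# Confirm the queue exists and is enabled
lpstat -p Elwha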
19qreqarcaoly2eymx28xi568lqlar3
Wiki tips
0
50
292
2016-06-13T17:09:43Z
Dstrodtman
5
Created page with "Editing MediaWiki pages is easy. All users with an account should have privileges to add and edit pages. ==Adding Pages== When adding a new page, it is a good idea to first..."
wikitext
text/x-wiki
Editing MediaWiki pages is easy. All users with an account should have privileges to add and edit pages.
==Adding Pages==
When adding a new page, it is a good idea to first create the organizational structure in the sidebar. Access the sidebar by navigating [http://depts.washington.edu/bdelab/wiki/index.php?title=MediaWiki:Sidebar here] and clicking the edit tab. Follow this format (see the example below):
* marks a section heading
** creates a page link (page_title|Page Title)
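For example, the sidebar entries for this wiki's computer-setup and wiki-editing pages sit under a 'Lab Admin' heading and look like this:
* Lab Admin
** System_Setup|Computer Setup
** wiki_tips|Editing This Wiki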
n65g59d9gv0gynln474ubxi0srsb3gz
File:Ac-pc.jpg
6
8
40
2015-09-03T19:58:06Z
Jyeatman
1
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:CueGabors3.jpg
6
22
141
2015-11-18T23:14:45Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:CueGaborsBigTilt2.jpg
6
23
142
2015-11-18T23:16:04Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:CuedGabor.jpg
6
27
161
2015-11-19T18:41:46Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:CuedGabor2.jpg
6
28
162
2015-11-19T18:42:05Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:FeedbackPointsCorrect.jpg
6
32
184
2015-12-02T21:59:13Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:FeedbackPointsError.jpg
6
33
185
2015-12-02T22:00:06Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:FeedbackStar.jpg
6
25
149
2015-11-18T23:49:20Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
163
149
2015-11-19T18:42:43Z
Pdonnelly
2
Pdonnelly uploaded a new version of [[File:FeedbackStar.jpg]]
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:Gabors1.jpg
6
19
137
2015-11-18T22:42:54Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:Gabors2.jpg
6
20
138
2015-11-18T22:52:32Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
164
138
2015-11-19T18:46:50Z
Pdonnelly
2
Pdonnelly uploaded a new version of [[File:Gabors2.jpg]]
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:GaborsCloseUp.jpg
6
26
159
2015-11-19T17:47:06Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:Inplaneview.png
6
38
227
2016-01-14T18:31:23Z
Jyeatman
1
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:LoadInverse-mne analyze.png
6
10
56
2015-10-23T19:14:55Z
Jyeatman
1
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:Mne analyze AdjustCoordAlign.png
6
12
61
2015-10-23T22:07:08Z
Jyeatman
1
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:Mne analyze viewer.png
6
13
62
2015-10-23T22:07:29Z
Jyeatman
1
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:MrInit.png
6
40
229
2016-01-14T18:32:08Z
Jyeatman
1
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:MrInit SessionDesc.png
6
39
228
2016-01-14T18:31:41Z
Jyeatman
1
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:RedCross.jpg
6
29
169
2015-11-19T18:53:07Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:SingleSmallTilt2.jpg
6
35
206
2015-12-10T23:44:00Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
208
206
2015-12-10T23:47:32Z
Pdonnelly
2
Pdonnelly uploaded a new version of [[File:SingleSmallTilt2.jpg]]
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:SingleStim3.jpg
6
24
143
2015-11-18T23:26:05Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
165
143
2015-11-19T18:47:18Z
Pdonnelly
2
Pdonnelly uploaded a new version of [[File:SingleStim3.jpg]]
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:SmallTilt.jpg
6
21
140
2015-11-18T22:58:17Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
166
140
2015-11-19T18:48:07Z
Pdonnelly
2
Pdonnelly uploaded a new version of [[File:SmallTilt.jpg]]
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:V3LgCueBot.jpg
6
47
278
2016-05-20T23:05:48Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:V3SmCueBot.png
6
48
279
2016-05-20T23:06:55Z
Pdonnelly
2
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:V3SmCueBotCrop.png
6
49
282
2016-06-06T22:45:04Z
Alexwhite
3
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
File:Viewer options.png
6
11
60
2015-10-23T22:05:47Z
Jyeatman
1
wikitext
text/x-wiki
phoiac9h4m842xq45sp7s6u21eteeq1
MediaWiki:Sidebar
8
3
4
2015-08-13T18:59:09Z
Jyeatman
1
Created page with " * navigation ** mainpage|mainpage-description ** dataanalysis|Data Analysis ** recentchanges-url|recentchanges ** randompage-url|randompage ** helppage|help * SEARCH * TOOLBO..."
wikitext
text/x-wiki
* navigation
** mainpage|mainpage-description
** dataanalysis|Data Analysis
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
4z15j8xthl0emweonlz3d8c9xq74xy5
5
4
2015-08-13T18:59:41Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** mainpage|mainpage-description
** data analysis|Data Analysis
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
f8vno2ncfm31l7qbc0gl5chxq5l0v8t
6
5
2015-08-13T19:00:32Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** mainpage|mainpage-description
** Data_Analysis|Data Analysis
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
swd1cwbomtsrdkhb9btblhlkh5ph5vr
7
6
2015-08-13T19:02:06Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** home|Home-description
** Data_Analysis|Data Analysis
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
guvahk8etmnsm93qlw1ctmt1mbw6bo9
8
7
2015-08-13T19:02:45Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** home|Home
** Data_Analysis|Data Analysis
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
1x4wgyxavu4b8icbhyeail05eo78e36
9
8
2015-08-13T19:03:42Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** Data_Analysis|Data Analysis
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* SEARCH
* TOOLBOX
* LANGUAGES
a1j69ujuxc6ut1f18oh86esemzl6cww
11
9
2015-08-13T19:09:00Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** Data_Analysis|Data Analysis
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* Lab Admin
* SEARCH
* TOOLBOX
* LANGUAGES
otllkkrmo7tft9ic44vicylyz265x7u
12
11
2015-08-13T19:10:20Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** randompage-url|randompage
** helppage|help
* Acquiring and Analyzing Data
** Data_Analysis|Data Analysis
* Lab Admin
* SEARCH
* TOOLBOX
* LANGUAGES
rta28eseb49ljlot6zmlo1znr0gh6d0
13
12
2015-08-14T18:47:01Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Anatomical_Data|Anatomical Data
** Data_Analysis|Data Analysis
* Lab Admin
* SEARCH
* TOOLBOX
* LANGUAGES
h0ttylhx9wbz8o5kyvy53md43llnx47
14
13
2015-08-14T19:21:51Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Anatomical_Data|Anatomical Data
***Diffusion|Diffusion
** Data_Analysis|Data Analysis
* Lab Admin
* SEARCH
* TOOLBOX
* LANGUAGES
fzrn6owcda7t8hg7j9j5h352959n8xd
15
14
2015-08-14T19:27:57Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Anatomy_Pipeline|Anatomy
**Diffusion_Pipeline|Diffusion
* Data Acquisition
**MRI
**MEG
**Behavioral
* Lab Admin
**IRB
* SEARCH
* TOOLBOX
* LANGUAGES
ihdossvn2e8nxvji5a3ucxhp2evtfvg
16
15
2015-08-14T19:29:29Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Anatomy_Pipeline|Anatomy
**Diffusion_Pipeline|Diffusion
* Data Acquisition
**MRI_Data_Acquisition|MRI
**MEG_Data_Acquisition|MEG
**Behavioral|Behavioral
* Lab Admin
**IRB
* SEARCH
* TOOLBOX
* LANGUAGES
5mo164291cn55h2j71n7224d6bzutng
74
16
2015-10-23T23:25:29Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
* Data Acquisition
** MRI_Data_Acquisition|MRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* SEARCH
* TOOLBOX
* LANGUAGES
bnbcf2t6e2ubwmnk6b85lp3mw6qy5lo
110
74
2015-11-02T21:27:32Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical_Thickness
* Data Acquisition
** MRI_Data_Acquisition|MRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* SEARCH
* TOOLBOX
* LANGUAGES
svqhfa2vwummez182hsxx5b0cmqtvcu
111
110
2015-11-02T21:27:48Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical Thickness|Cortical_Thickness
* Data Acquisition
** MRI_Data_Acquisition|MRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* SEARCH
* TOOLBOX
* LANGUAGES
304d0hiap7rifevgvd3mc1vnvmfr6gk
112
111
2015-11-02T21:28:08Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
* Data Acquisition
** MRI_Data_Acquisition|MRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* SEARCH
* TOOLBOX
* LANGUAGES
8369tcs23cb3ypqrqwzlv6ojx33djmk
125
112
2015-11-03T01:30:16Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
* Data Acquisition
** MRI_Data_Acquisition|MRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* SEARCH
* TOOLBOX
* LANGUAGES
pdlmmc5evxkvq3jxp8s74bfqfb93lnk
134
125
2015-11-17T16:38:06Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
* Data Acquisition
** MRI_Data_Acquisition|MRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* SEARCH
* TOOLBOX
* LANGUAGES
5kzau1w2evvqkbmycsgqve9gpjwqbe3
181
134
2015-12-02T00:29:56Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
* Data Acquisition
** MRI_Data_Acquisition|MRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
* SEARCH
* TOOLBOX
* LANGUAGES
aclqn9qjh8xc106mhxn6kd8d2xequ7v
190
181
2015-12-03T21:42:09Z
Pdonnelly
2
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
* Data Acquisition
** Data_Organization|Data Organization
** MRI_Data_Acquisition|MRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
* SEARCH
* TOOLBOX
* LANGUAGES
a6chb3quk89l3mty3ufnb0qwu4i4nlx
216
190
2016-01-08T20:48:25Z
Pdonnelly
2
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
* Data Acquisition
** Data_Organization|Data Organization
** MRI_Data_Acquisition|MRI
** fMRI_Data_Acquisition|fMRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
* SEARCH
* TOOLBOX
* LANGUAGES
3h8bjgjk4zqep01ukpsy1mooe66j4z3
224
216
2016-01-14T18:24:02Z
Jyeatman
1
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
** fMRI|fMRI
* Data Acquisition
** Data_Organization|Data Organization
** MRI_Data_Acquisition|MRI
** fMRI_Data_Acquisition|fMRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
* SEARCH
* TOOLBOX
* LANGUAGES
o4spguenxkxac9an9tw3kx16ziw442f
237
224
2016-02-18T23:50:51Z
Pdonnelly
2
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
** fMRI|fMRI
* Data Acquisition
** Data_Organization|Data Organization
** MRI_Data_Acquisition|MRI
** fMRI_Data_Acquisition|fMRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
** Friends_Affiliates|Friends & Affiliates
* SEARCH
* TOOLBOX
* LANGUAGES
llf5eaebpcxy5bpwlh5gx5zmczj1cyr
242
237
2016-02-18T23:56:58Z
Pdonnelly
2
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
** fMRI|fMRI
* Data Acquisition
** Data_Organization|Data Organization
** MRI_Data_Acquisition|MRI
** fMRI_Data_Acquisition|fMRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
** Friends & Affiliates|Friends & Affiliates
* SEARCH
* TOOLBOX
* LANGUAGES
io49kjiuhdp2d6vbd6os8wzenockavu
243
242
2016-02-18T23:57:41Z
Pdonnelly
2
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
** fMRI|fMRI
* Data Acquisition
** Data_Organization|Data Organization
** MRI_Data_Acquisition|MRI
** fMRI_Data_Acquisition|fMRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
** Friends_Affiliates|Friends & Affiliates
* SEARCH
* TOOLBOX
* LANGUAGES
llf5eaebpcxy5bpwlh5gx5zmczj1cyr
244
243
2016-02-18T23:58:08Z
Pdonnelly
2
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
** fMRI|fMRI
* Data Acquisition
** Data_Organization|Data Organization
** MRI_Data_Acquisition|MRI
** fMRI_Data_Acquisition|fMRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
** Friends & Affiliates|Friends & Affiliates
* SEARCH
* TOOLBOX
* LANGUAGES
io49kjiuhdp2d6vbd6os8wzenockavu
262
244
2016-03-24T22:35:47Z
Dstrodtman
5
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
** fMRI|fMRI
* Data Acquisition
** Data_Organization|Data Organization
** MRI_Data_Acquisition|MRI
** fMRI_Data_Acquisition|fMRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Human Connectome Project
** Accessing Data
** Processing Data
** Data Organization
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
** Friends & Affiliates|Friends & Affiliates
* SEARCH
* TOOLBOX
* LANGUAGES
bcr64ysym8g9snudnoohi5516n9f4wr
263
262
2016-03-24T22:37:22Z
Dstrodtman
5
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
** fMRI|fMRI
* Data Acquisition
** Data_Organization|Data Organization
** MRI_Data_Acquisition|MRI
** fMRI_Data_Acquisition|fMRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Human Connectome Project
** Accessing Data|HCP_Access
** Processing Data|HCP_Process
** Data Organization|HCP_Organized
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
** Friends & Affiliates|Friends & Affiliates
* SEARCH
* TOOLBOX
* LANGUAGES
qs2mfxj8wxvy5bfw72bkpmklc3i561t
266
263
2016-03-24T22:41:03Z
Dstrodtman
5
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
** fMRI|fMRI
* Data Acquisition
** Data_Organization|Data Organization
** MRI_Data_Acquisition|MRI
** fMRI_Data_Acquisition|fMRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Human Connectome Project
** HCP_Access|Accessing Data
** HCP_Process|Processing Data
** HCP_Organized|Data Organization
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
** Friends & Affiliates|Friends & Affiliates
* SEARCH
* TOOLBOX
* LANGUAGES
q0611igaqtzem5qr2igxkt1zp0ucndz
272
266
2016-03-24T22:48:34Z
Dstrodtman
5
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
** fMRI|fMRI
* Data Acquisition
** Data_Organization|Data Organization
** MRI_Data_Acquisition|MRI
** fMRI_Data_Acquisition|fMRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Human Connectome Project
** HCP_Access|Accessing Data
** HCP_Process|Processing Data
** HCP_Organization|Data Organization
* Lab Admin
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
** Friends & Affiliates|Friends & Affiliates
* SEARCH
* TOOLBOX
* LANGUAGES
fwaeun18600sd0vbhq0n7nqxr9ej2m0
291
272
2016-06-13T16:53:55Z
Dstrodtman
5
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
** fMRI|fMRI
* Data Acquisition
** Data_Organization|Data Organization
** MRI_Data_Acquisition|MRI
** fMRI_Data_Acquisition|fMRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Human Connectome Project
** HCP_Access|Accessing Data
** HCP_Process|Processing Data
** HCP_Organization|Data Organization
* Lab Admin
** System_Setup|Computer Setup
** wiki_tips|Editing This Wiki
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
** Friends & Affiliates|Friends & Affiliates
* SEARCH
* TOOLBOX
* LANGUAGES
22t8vvbwrxp5bygp03sx3g5irzohrc6
312
291
2016-07-27T22:25:17Z
Dstrodtman
5
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
** fMRI|fMRI
* Data Acquisition
** Data_Organization|Data Organization
** MRI_Data_Acquisition|MRI
** fMRI_Data_Acquisition|fMRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Human Connectome Project
** HCP_Access|Accessing Data
** HCP_Process|Processing Data
** HCP_Organization|Data Organization
* Lab Admin
** System_Setup|Computer Setup
** NFS
** wiki_tips|Editing This Wiki
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
** Friends & Affiliates|Friends & Affiliates
* SEARCH
* TOOLBOX
* LANGUAGES
9dj9w6xgk9dqw7822xiv945sea4pgzc
313
312
2016-07-27T22:26:18Z
Dstrodtman
5
wikitext
text/x-wiki
* navigation
** Brain_Development_&_Education_Lab|Home
** recentchanges-url|recentchanges
** helppage|help
* Data Analysis
** Software_Setup|Software Setup
** Anatomy_Pipeline|Anatomy
** Diffusion_Pipeline|Diffusion
** MEG_Pipeline|MEG
** Cortical_Thickness|Cortical Thickness
** fMRI|fMRI
* Data Acquisition
** Data_Organization|Data Organization
** MRI_Data_Acquisition|MRI
** fMRI_Data_Acquisition|fMRI
** MEG_Data_Acquisition|MEG
** Behavioral|Behavioral
**Psychophysics|Psychophysics
* Human Connectome Project
** HCP_Access|Accessing Data
** HCP_Process|Processing Data
** HCP_Organization|Data Organization
* Lab Admin
** System_Setup|Computer Setup
** NFS|NFS
** wiki_tips|Editing This Wiki
** IRB|IRB
** ILABS_Brain_Seminar| ILABS Brain Seminar
* Useful Resources
** Reading_Instruction|Reading instruction/intervention
** Friends & Affiliates|Friends & Affiliates
* SEARCH
* TOOLBOX
* LANGUAGES
93n9mhm3pubgedbliu2tm9eoxwoif94